Coherent and Consistent Relational Transfer Learning with Auto-encoders

Harald Strömfelt1,3, Luke Dickens2, Artur d'Avila Garcez3 and Alessandra Russo1
1 Imperial College London, Exhibition Rd, South Kensington, London SW7 2BX, UK
2 University College London, Gower St, London WC1E 6BT, UK
3 City, University of London, Northampton Square, London EC1V 0HB, UK

Abstract

Human-defined concepts are inherently transferable, but it is not clear under what conditions they can be modelled effectively by non-symbolic artificial learners. This paper argues that for a transferable concept to be learned, the system of relations that define it must be coherent across domains and properties. That is, they should be consistent with respect to relational constraints, and this consistency must extend beyond the representations encountered in the source domain. Further, where relations are modelled by differentiable functions, their gradients must conform: the functions must at times move together to preserve consistency. We propose a Partial Relation Transfer (PRT) task which exposes how well relation-decoders model these properties, and exemplify this with an ordinality prediction transfer task, including a new data set for the transfer domain. We evaluate this on existing relation-decoder models, as well as a novel model designed around the principles of consistency and gradient conformity. Results show that consistency across broad regions of input space indicates good transfer performance, and that good gradient conformity facilitates consistency.

Keywords

Representation Learning, Relation Learning, Variational AutoEncoders, Concept Learning

1. Introduction

In many situations, concepts that pertain to one set of data can also be relevant to another [1, 2]. Take, for instance, the general concept of ordinality, whose semantics are defined by the relations isSuccessor, isPredecessor, isGreater, isLess and isEqual, together with their constraints.
Successfully capturing this concept involves learning the corresponding relations such that they maintain data set and property independence, with no retraining. This is to say that they have been abstracted from the specific property and act instead as a generic set of characterizing relations for the semantics of ordinality. For this, we argue that the relations must be consistent with their expected constraints and coherent across ordinal properties spanning different data sets, which means their consistency is maintained regardless of data set or particular ordinal property. As a concrete example, suppose that we have successfully learned to order images of numbers by their abstract digit identity, and are presented with a new data set containing images of individual stacks of blocks. Suppose then that we wish to obtain an ordering over them, such that we can compare arbitrary data instances using the above relations. Provided that the learned relations are consistent with their expected constraints, it should be possible to obtain an encoding that establishes each successor, via our isSuccessor relation, and immediately be able to compare data instances over the remaining relations.

Submitted to the 15th International Workshop On Neural-Symbolic Learning and Reasoning (NeSy '21). Contact: h.stromfelt17@imperial.ac.uk (H. Strömfelt). © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (http://ceur-ws.org), ISSN 1613-0073.

Following this logic, the primary purpose of this paper is to evaluate under which conditions a relation-decoder model is able to obtain the ordinality concept. We do this by taking a set of popular relation-decoder models, including a proposed Dynamic Comparator (DC) model, and assess 1. their consistencies as measured in the source data set, and 2.
their ability to perform a Partial Relation Transfer (PRT) task to a novel target data set, which measures the robustness of their consistencies across domains. The evaluation takes place in two steps. In the first, we learn the above set of ordinality relations by ordering MNIST images based on their abstract digit identity, and report each model's consistency profile. In the next step, we take the now pretrained isSuccessor relation-decoder and apply it to a proposed BlockStacks data set, which consists of images of multicolored block stacks. Each stack contains a single red block at various heights, which we use to test the degree to which ordering the encodings of each block stack image, subject to the pretrained isSuccessor relation, leads to transferred prediction accuracy across the remaining relations. In summary, the contributions of our work are:

• We devise an experimental setup that can expose the degree to which learning relations leads to concept abstraction, together with a new BlockStacks data set that presents a challenging ordering task based on a complex property.
• We introduce a set of data-set-agnostic characteristic measures for relation-decoders which can help determine their ability to perform PRT.
• We present a Dynamic Comparator model that achieves excellent PRT.
• Finally, we present a comprehensive analysis of model characteristics against corresponding PRT performance, for a set of popular relation-decoders.

The rest of the paper is organised as follows. Section 2 first positions our paper with respect to related work. Section 3 formalises the PRT task and outlines the architecture we employ to solve it, including the proposed DC relation-decoder model. We then define how we compute model consistency and gradient-conformity in Section 4. Finally, we provide results and analysis in Section 5, with concluding remarks in Section 6.

2.
Related Work

Relational representations play a prominent role in Knowledge Graph Embedding (KGE), wherein sets of relation-decoders are jointly learned, through triplet link prediction, in order to obtain a semantic latent factor representation for entities [3, 4, 5, 6, 7, 8, 9, 10, 11]. In principle, any KGE link prediction model can be employed in this work, but we focus on those that assume a Euclidean representation space and do not require any additional per-triplet engineering. Although KGE methods typically do not use a shared auto-encoder as we do in this paper, Schlichtkrull et al. [12] did adopt an auto-encoding framework, in which a graph neural network is used as the encoder; however, they did not work with visual data and the model was not applied to transfer. Disentanglement, which also aims to learn semantic representations for data, is of relevance to this work [13, 2]; multiple methods have been proposed, for example using Generative Adversarial Networks [14] and VAEs [15, 1, 16, 17, 18, 19, 20]. Of particular relevance to our work are investigations into the transferability of disentangled representations [21, 22, 23], but these did not include relation learning. A bridge between relation learning and disentanglement, wherein relation-decoders are employed as a semi-supervision signal for VAEs, can be found in [24, 25, 26]. Lastly, we note that our experimental setup is most reminiscent of domain adaptation [27]. To the best of our knowledge, no work has compared relation-decoders in their ability to abstract concepts, as measured by their consistency and its transfer across domains.

3. The Partial Relation Transfer Task and Model

Partial Relation Transfer (PRT) is at its core a domain adaptation task [27], wherein we have a source and a target data domain, consisting of sets of images, 𝑋𝑠 and 𝑋𝑡 respectively, and a set of shared relation prediction tasks, ℛ = {𝑟1, …, 𝑟𝑛}.
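For the ordinality instance of ℛ used throughout this paper, each relation's ground truth can be expressed as a predicate over an abstract integer property (e.g. digit identity for MNIST). The following is a toy sketch in our own notation, not the authors' code:

```python
# Toy sketch: ground-truth ordinality labels over an abstract integer property.
# y[r][(i, j)] = 1 iff relation r holds between items i and j.
# The relation names follow the paper; the encoding as Python predicates is ours.

RELATIONS = {
    "isSuccessor":   lambda a, b: a == b + 1,
    "isPredecessor": lambda a, b: a == b - 1,
    "isGreater":     lambda a, b: a > b,
    "isLess":        lambda a, b: a < b,
    "isEqual":       lambda a, b: a == b,
}

def make_labels(properties):
    """Build the full pairwise label table for a list of property values."""
    n = len(properties)
    return {r: {(i, j): int(pred(properties[i], properties[j]))
                for i in range(n) for j in range(n)}
            for r, pred in RELATIONS.items()}

y = make_labels([0, 1, 2, 3])   # four items with ordered property values 0..3
```

Note the built-in constraints of the system: isSuccessor(i, j) implies isGreater(i, j), and isGreater and isLess are mutually exclusive; these are exactly the relational constraints a learned system of relation-decoders must preserve.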
We approximate each relation using a relation-decoder 𝜙ᴹᵣ : 𝑍 × 𝑍 → [0, 1], where 𝑍 denotes a latent space that contains all image encodings 𝑧𝑖 ∈ 𝑍. The superscript 𝑀 denotes a specific relation-decoder model, as we test multiple variants. To obtain embeddings we use a domain-specific auto-encoder, consisting of an encoder 𝜓ₑₙ𝒸^{𝑠/𝑡} : 𝑋 → 𝑍 and a decoder 𝜓_{𝑑𝑒𝑐}^{𝑠/𝑡} : 𝑍 → 𝑋, which helps to minimise information loss through reconstruction of the input image¹.

The evaluation takes place as a two-step procedure. In the first step, all relation-decoders are trained in the source domain, as a semi-supervision signal for the auto-encoder, using available labels 𝑦^𝑠 ∈ ℝ^{|ℛ|×|𝑋𝑠|×|𝑋𝑠|} that specify whether a relation 𝑟 ∈ ℛ holds between images 𝑥𝑖, 𝑥𝑗 ∈ 𝑋𝑠. Here, |⋅| denotes the cardinality of the operand set; in practice we only use a small fraction of the available labels. In the second evaluation step, we initialise a new auto-encoder to be applied to the target data set and use a subset of the pretrained relation-decoders, with labels 𝑦^𝑡 ∈ ℝ^{|ℛ|×|𝑋𝑡|×|𝑋𝑡|}, to act as fixed-parameter 'guides' for the encoder.

To obtain informative data encodings, we use a Variational AutoEncoder (VAE), specifically the 𝛽-VAE, given its simplicity and demonstrated ability to separate distinct factors in the latent representation [1, 15, 28]. The 𝛽-VAE achieves this by optimising the ELBO objective, which for the purposes of this paper we express as a loss over both encoder and decoder:

\mathcal{L}^{ELBO}_{\beta\text{-VAE}} = \mathcal{L}(\psi^{s/t}_{enc}, \psi^{s/t}_{dec}) + \beta\, \mathcal{L}(\psi^{s/t}_{enc}, \mathcal{N}(0, \mathbb{1})),   (1)

where the additional scalar hyperparameter 𝛽 is used to influence disentanglement through stronger distribution-matching pressure towards an isotropic zero-mean Gaussian prior, 𝒩(0, 𝟙). When 𝛽 = 1 we obtain the original VAE objective [28]. We provide the full ELBO loss, with a detailed explanation, in Appendix B.
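As a minimal sketch of the Eq. (1) objective (our simplification: a diagonal-Gaussian encoder posterior and a squared-error reconstruction term standing in for the likelihood; the paper's exact ELBO is deferred to its Appendix B):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, log_var))

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Sketch of Eq. (1): reconstruction term plus beta-weighted prior-matching term."""
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))  # stand-in for -log p(x|z)
    return recon + beta * kl_to_standard_normal(mu, log_var)
```

With 𝛽 = 1 this reduces to the standard VAE objective (up to the choice of reconstruction likelihood); larger 𝛽 increases the pressure towards the isotropic prior.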
Each experiment involves taking embeddings from the corresponding encoder and passing them through to sets of relation-decoders (either the full set, in the case of the source domain, or only a subset in the target domain). We can treat each relation-decoder as producing a prediction 𝑦̂ᵣᵢⱼ for whether relation 𝑟 holds between data instances 𝑖 and 𝑗 [5]. Using the ground truth 𝑦ᵣᵢⱼ, we can then compute the loss over all relation-decoders, ℒ_{𝑅𝑒𝑙𝐷𝑒𝑐}, as the binary cross-entropy of prediction versus ground truth. This gives us the final joint objective between VAE and relation-decoders:

\mathcal{L}_{joint} = \mathcal{L}^{ELBO}_{\beta\text{-VAE}} - \lambda \underbrace{\mathbb{E}_{r, y_{rij}, z_i, z_j}\!\left[ y_{rij} \log(\hat{y}_{rij}) + (1 - y_{rij}) \log(1 - \hat{y}_{rij}) \right]}_{\mathcal{L}_{RelDec}},   (2)

where 𝜆 is a scalar weighting parameter.

¹ Further analysis of the performance of BlockStacks embeddings on a domain-dependent task can be found in Appendix E.

Figure 1: Depiction of the architecture we use for PRT. In this diagram, we show how the initial relation learning is performed on the source MNIST dataset. Moving to the target domain involves using 𝜓^{𝑡}_{𝑒𝑛𝑐/𝑑𝑒𝑐} and fixing parameters for each included 𝜙ᵣ relation-decoder.

3.1. Dynamic Comparator

In our analysis, we include a proposed low-complexity, but nonetheless expressive, "Dynamic Comparator" (DC) model, which is designed to model systems of relations whilst encouraging desirable properties for PRT. The overall DC model is composed of two modes: a distance-based measure, 𝜙†ᵣ, that can compute how close the vector difference between two inputs is to a positive- or negative-valued reference vector, and a step-like function, 𝜙‡ᵣ, that determines the sign of the difference between two points, optionally with an offset. The overall DC model is given by²:

\phi^{DC}_r(z_i, z_j) = a_0 \cdot \underbrace{\sigma_0\!\big(\eta_0 \|u \odot (z_i - z_j + b^{\dagger})\|_2\big)}_{\phi^{\dagger}_r} + a_1 \cdot \underbrace{\sigma_1\!\big(\eta_1 \cdot u^{\top}(z_i - z_j + b^{\ddagger})\big)}_{\phi^{\ddagger}_r}.   (3)
Here, 𝑎 = Softmax(𝐴) ∈ ℝ² is an attention weighting between the two modes, which ensures that 𝜙^{𝐷𝐶} is bounded to [0, 1]; 𝜎₀ and 𝜎₁ are an exponential and a sigmoid function, respectively; 𝑢 = Softmax(𝑈) ∈ ℝᵐ is an attention mask which is applied to the 𝑚-dimensional latent embeddings; 𝑏†, 𝑏‡ ∈ ℝᵐ are learnable bias terms that enable an offset for each mode; and 𝜂₀ ∈ ℝ⁺ is a non-negative and 𝜂₁ ∈ ℝ an any-valued scalar term, respectively. Lastly, ⊙ denotes the Hadamard product (elementwise multiplication) and ‖⋅‖₂ is the 𝐿2-norm.

² In the main text we report results for this DC model, but we can use any function that has the required characteristics for 𝜙† and 𝜙‡. We include results for other versions in Appendix D.

Due to a convergence issue when using a pretrained DC with fixed parameters, we needed to use a flexible fitting procedure in which we allow the DC parameters to train in the target domain, but with an additional loss term ‖𝜌* − 𝜌‖ between the pretrained parameters 𝜌* and the currently trained parameters 𝜌. In all cases we evaluated the final parameter values in the target domain and found them to be approximately equivalent to 𝜌*. We did not apply this method to the other models, as they were all able to fit the isSuccessor relation in the target domain.

4. Measuring relation-decoder characteristics

In this section we describe a series of measures of the intrinsic characteristics of each relation-decoder, which together help identify the behaviour of each relation-decoder model and provide insight regarding their respective PRT performance. For any system of relations, we can write down a truth-table that defines the valid truth-states that they may collectively take, which we expect our relation-decoders to model. For example, we know that any time isGreater is true, isLess must not be.
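For a pair such as (isGreater, isLess), this truth-table can be enumerated directly (a toy sketch, using our own encoding of the constraint):

```python
from itertools import product

# Toy sketch: enumerate the valid joint truth-states of (isGreater, isLess).
# The only constraint is that both relations may not hold at the same time.
def is_valid_state(is_greater, is_less):
    return not (is_greater and is_less)

truth_table = {(g, l): is_valid_state(g, l)
               for g, l in product((True, False), repeat=2)}
```

Three of the four joint states are valid; only (isGreater, isLess) = (True, True) is excluded.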
By assuming that each relation-decoder output is pairwise conditionally independent given 𝑧𝑖, 𝑧𝑗 (for instance, 𝑝(isGreater, isLess | 𝑧𝑖, 𝑧𝑗) = 𝑝(isGreater | 𝑧𝑖, 𝑧𝑗) 𝑝(isLess | 𝑧𝑖, 𝑧𝑗)), we can produce a probability statement for whether the relations are consistent with the valid entries of the truth-table. Taking 𝑟₁ = isGreater and 𝑟₂ = isLess as our entire system of relations, we can produce the following truth-table conversion, where invalid entries are omitted:

𝑟₁(𝑥𝑖, 𝑥𝑗)   𝑟₂(𝑥𝑖, 𝑥𝑗)   ℱ(𝑟₁, 𝑟₂)
T            F            T
F            T            T
F            F            T

\mathcal{F}(r_1, r_2) = \forall x_i, x_j \big( (r_1(x_i, x_j) \land \lnot r_2(x_i, x_j)) \lor (\lnot r_1(x_i, x_j) \land r_2(x_i, x_j)) \lor (\lnot r_1(x_i, x_j) \land \lnot r_2(x_i, x_j)) \big),   (4)

which, using our relation-decoders for each relation, and with 𝑧_{𝑖,𝑗} = 𝜓_{𝑒𝑛𝑐}(𝑥_{𝑖,𝑗}) and ¬𝜙ᵣ(𝑧𝑖, 𝑧𝑗) = 1 − 𝜙ᵣ(𝑧𝑖, 𝑧𝑗), lets us express the probability of ℱ being true as:

p(\mathcal{F} \mid z_i, z_j) = \phi_{r_1}(z_i, z_j)\,(1 - \phi_{r_2}(z_i, z_j)) + (1 - \phi_{r_1}(z_i, z_j))\,\phi_{r_2}(z_i, z_j) + (1 - \phi_{r_1}(z_i, z_j))\,(1 - \phi_{r_2}(z_i, z_j)).   (5)

Finally, since ℱ should hold for all input combinations, we heavily penalise violations by using a binary cross-entropy loss between 𝑝(ℱ) and the expected (always-true) outcome:

H_{True}(p(\mathcal{F})) = -\frac{1}{N} \sum_{z_i, z_j \in Z} 1 \cdot \log p(\mathcal{F} \mid z_i, z_j),   (6)

where 𝑍 is the latent space, as we can compute this score for any samples from this space³, and 𝑁 is a normalising constant, equal to the number of (𝑧𝑖, 𝑧𝑗) sample pairs used in the calculation. We refer to this measure as Con-A, reflecting the fact that we use it to measure consistency across multiple relations.

To provide a deeper understanding of how relation-decoders collectively interact with their inputs, we use a gradient evaluation to see whether models respond similarly to changes in their input.
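Numerically, Eqs. (5) and (6) amount to the following (a sketch in which scalar probabilities stand in for the decoder outputs 𝜙ᵣ(𝑧𝑖, 𝑧𝑗)):

```python
import math

def p_consistent(p_greater, p_less):
    """Eq. (5): probability that (isGreater, isLess) is in a valid truth-state,
    assuming the two decoder outputs are conditionally independent."""
    return (p_greater * (1.0 - p_less)
            + (1.0 - p_greater) * p_less
            + (1.0 - p_greater) * (1.0 - p_less))

def con_a(decoder_outputs):
    """Eq. (6): mean binary cross-entropy of p(F) against the always-true target,
    over a sample of (z_i, z_j) pairs represented by their decoder outputs."""
    return -sum(math.log(p_consistent(pg, pl))
                for pg, pl in decoder_outputs) / len(decoder_outputs)
```

A perfectly consistent pair of decoders gives Con-A = 0; any probability mass on the invalid (True, True) state increases the score.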
For a set of relations, we define the gradient-conformity (GC) of relation 𝑟𝑖 against all others by the following absolute cosine-similarity:

GC = \frac{|d_i^{\top} d_j|}{\|d_i\|_2 \|d_j\|_2}, \quad \text{where } d_i = \frac{d\phi_{r_i}}{dz^c}\Big|_{z^c = z^c_s} \text{ and } d_j = \frac{d\phi_{r_j}}{dz^c}\Big|_{z^c = z^c_s}, \;\; \forall i \neq j,   (7)

where |⋅| denotes the absolute value of the operand and 𝑧ᶜ is the concatenation of each relation-decoder's inputs, with gradients evaluated at reference inputs 𝑧ᶜₛ. GC will be 1 if gradients are aligned and zero if orthogonal⁴.

5. Results

This section presents results for the PRT task on a range of relation-decoder models. In the source domain, we learn a system of binary relations, ℛ = {isSuccessor (S), isPredecessor (P), isGreater (G), isEqual (E), isLess (L)}, on digits represented in MNIST images, alongside a 𝛽-VAE. In the target domain, we take the pretrained S relation as a fixed-parameter guide for a new 𝛽-VAE applied to BlockStacks images (see Appendix A for BlockStacks image examples), and then evaluate PRT accuracy on the held-out G, E, L and P relations. The relation-decoder models compared here are: TransR [29], HolE [30], NTN [3], our proposed DC and a basic neural-network baseline, NN. NN is a simple four-layer (𝑙in, 𝑙₁, 𝑙₂, 𝑙out) neural network with layer sizes 𝑙in = 2𝑑𝑧, 𝑙₁ = 2𝑑𝑧 and 𝑙₂ = 𝑑𝑧, with ReLU activations. The final output layer 𝑙out is a single value passed through a sigmoid function, to bound the output to [0, 1]. Further model details are provided in Appendix C. We vary 𝛽 only in the source domain, ranging across the values {1, 4, 8, 12}, but fix it in the target domain. 𝜆 is fixed in both domains (see Appendix C.3 for further details on hyperparameter settings).
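For reference, the GC computation of Eq. (7) is simply an absolute cosine similarity (a sketch; in practice the gradients 𝑑𝑖, 𝑑𝑗 would be obtained by automatic differentiation of each relation-decoder at the shared reference input 𝑧ᶜₛ, here they are passed in directly):

```python
import math

def gradient_conformity(d_i, d_j):
    """Eq. (7): absolute cosine similarity between the gradients of two
    relation-decoders, both evaluated at the same reference input."""
    dot = sum(a * b for a, b in zip(d_i, d_j))
    norm_i = math.sqrt(sum(a * a for a in d_i))
    norm_j = math.sqrt(sum(b * b for b in d_j))
    return abs(dot) / (norm_i * norm_j)
```

Parallel and anti-parallel gradients both score 1; orthogonal gradients score 0.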
For the Con-A and GC measures, we produce encodings for three data splits: data-embeddings, where all inputs are encodings of a domain's test data; interpolation, where we obtain an empirical mean and variance for the domain's data-embeddings and sample from a corresponding Gaussian distribution; and extrapolation, where we sample from regions strictly outside the data-embeddings region.

³ In practice, as we cannot include every encoding combination, we provide an estimate.
⁴ We can evaluate against this measure for arbitrary samples from 𝑍.

Figure 2: [Top] Relation-decoder prediction accuracy per relation and model, in the source (left) and target (right) domains. Relations are abbreviated on the 𝑥-axis by { S: isSuccessor, P: isPredecessor, E: isEqual, G: isGreater, L: isLess }, with a red highlight identifying which relation is included as a guide for 𝜓^{𝑡}_{𝑒𝑛𝑐}. [Bottom] 𝛽 impact profiles for each relation-decoder model, aggregated across all relations in the source (left) domain and aggregated only over held-out relations in the target (right) domain. In all cases, higher values are better.

Figure 3: [Top] Con-A values for each relation-decoder model, referenced to the source (left) and target (right) domains (lower values are better). [Bottom] GC values for each relation-decoder (higher values are better). In all plots, darker color shades denote higher values of 𝛽, corresponding to greater disentanglement pressure from the 𝛽-VAE. In the top-left and bottom plots, the blue, green and red groups show results for data-embeddings, interpolation and extrapolation embeddings, respectively (see main text for details).

Figure 2-top provides relation-decoder prediction accuracy in both the source MNIST (left) and target BlockStacks (right) domains. Key observations are that DC produces excellent PRT performance, whilst NN, NTN and HolE all see some degradation from their source accuracies. TransR seems to maintain a similar accuracy profile. We include 𝛽's impact on these performances in Figure 2-bottom. Barring DC, which shows little discernible change in either domain, PRT performance is significantly impacted by 𝛽 in all models, even though 𝛽 has little effect in the source domain. Additionally, TransR has a strong positive correlation with 𝛽, whereas NN, NTN and HolE produce the best PRT performance with intermediate disentanglement pressure.

To interrogate further how 𝛽 affects each model, we provide: (Figure 3-top) mean relation Con-A referenced to both source (left) and target (right) domain embeddings; and (Figure 3-bottom) source-domain-referenced GC measures for each model. In the left and bottom plots, blue (left group), green (middle group) and red (right group) show results for the data-embeddings, interpolation and extrapolation regions of latent space, in that order. From the source-domain Con-A results, we note that DC shows excellent consistency across relations in all regions. Most other models have worse interpolation and extrapolation consistency. Increasing 𝛽 appears to give some improvement for all but HolE, but there are indications that this trend does not persist to the largest value, 𝛽 = 12. Interestingly, Con-A values for target data-embeddings (right) are notably worse than for source data-embeddings, with values closer to those for interpolation or extrapolation in the source domain. For GC, DC performance is close to 1 for all 𝛽, with no discernible change. All other models show weaker GC, with a positive correlation between GC and 𝛽. TransR and NN achieve significantly higher GC than NTN and HolE.

5.1. Key Experimental Results

5.1.1. Does good source task accuracy lead to successful PRT?

Since we transfer pretrained models from source to target domain and ensure that the target encoder, 𝜓^{𝑡}_{𝑒𝑛𝑐}, fits its encodings to S, we might expect that relation-decoding performance will be the same in both domains.
However, PRT performance varies significantly across models, despite DC, NN, NTN and HolE all performing close to 100% accuracy (and TransR above 80%) across all relations in the source domain, and despite all models achieving similar (or, in the case of DC, better) prediction accuracy on the guide relation S. It is firstly evident that DC is successful at PRT, sustaining approximately 100% accuracy across all held-out relations. NN achieves mostly good performance, with greater degradation on the P and E relations. Although HolE and NTN both achieve good PRT for P, there is increasing degradation across the E, G and L relations. TransR achieves strong relative performance, with PRT accuracy per relation comparable to what was possible in the source domain. These results indicate that source accuracy alone is not enough to determine whether models will be successful at PRT.

5.1.2. How does 𝛽 affect Con-A and GC and how does this impact model coherence?

To provide an overview of how increased disentanglement pressure affects each model, we first compare how 𝛽 affects model performance in both the source and target domains. Figure 2-bottom demonstrates that, although relation prediction accuracies for most models either do not respond, or respond negatively, to increases in 𝛽 in the source domain, their PRT behaviour differs significantly across models: DC shows no discernible change, whilst NN, NTN and HolE all show a parabolic response with a maximum PRT around 𝛽 = 8; TransR shows a general positive correlation, but with diminishing returns above 𝛽 = 8. To gain further insight into the role of disentanglement pressure, it is necessary to look at how each model's intrinsic behaviour responds to changes in 𝛽. First, we attempt to expose the relationship between 𝛽 and consistency, and whether this has any effect on PRT performance. By Figure 3-top, DC clearly outperforms all other models on Con-A, and this coincides with better PRT performance.
The next best performing model on Con-A in the source domain is also the next best on PRT performance. In most cases Con-A degrades for all models when moving from data-embeddings to interpolation and extrapolation, but the degree of degradation changes depending on the model. Interestingly, across all models, target Con-A is notably close to that of interpolation or extrapolation in the source-domain analysis. This suggests that guiding 𝜓^{𝑡}_{𝑒𝑛𝑐} to fit relation S produces data embeddings that lie in the interpolation or extrapolation regions with respect to MNIST embeddings, and hence that a relation-decoder model's ability to retain consistency over regions of latent space beyond where MNIST embeddings are found leads to improved PRT. These findings provide compelling evidence in support of our claim that consistency across relations is important for PRT performance.

Secondly, we examine how gradient-conformity affects PRT performance. To achieve successful PRT, fitting the target encoder to a single pretrained relation should lead to embeddings that are structured correctly with respect to the other pretrained relations. For this to be possible there must be a degree of conformity between how each model computes its system of relations. As an extreme case, suppose we have a two-dimensional latent representation, with two relations that are each calculated using entirely different dimensions of latent space. By fitting an encoder to one of these relations, there is no guarantee that the latent dimension that the other relation requires receives the necessary guidance. DC shows excellent and stable GC values (near 1) across all conditions. This is by design: the use of masks per relation ensures that if the masks match for any two relations, then their gradients will be either parallel or anti-parallel.
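This mask property can be checked directly for the step-like mode of Eq. (3): its gradient with respect to 𝑧𝑖 is a scalar multiple of the mask 𝑢, so any two relations sharing a mask have |cosine similarity| = 1 regardless of their other parameters. A toy sketch, with the gradient derived by hand and purely illustrative numbers:

```python
import math

# For the DC step mode phi(z_i, z_j) = sigmoid(eta * u.(z_i - z_j + b)),
# the gradient w.r.t. z_i is sigmoid' * eta * u -- a scalar multiple of the
# mask u. Two relations sharing u therefore conform exactly.

def step_mode_grad(u, eta, z_i, z_j, b):
    """Hand-derived gradient of the step mode w.r.t. z_i."""
    s = sum(ui * (a - c + bi) for ui, a, c, bi in zip(u, z_i, z_j, b))
    sig = 1.0 / (1.0 + math.exp(-eta * s))
    scale = sig * (1.0 - sig) * eta        # chain rule: d sigmoid / d s, times eta
    return [scale * ui for ui in u]

def abs_cosine(d_i, d_j):
    dot = sum(a * b for a, b in zip(d_i, d_j))
    ni = math.sqrt(sum(a * a for a in d_i))
    nj = math.sqrt(sum(b * b for b in d_j))
    return abs(dot) / (ni * nj)

u = [0.7, 0.3]                              # a shared attention mask
g1 = step_mode_grad(u, 2.0,  [1.0, 0.0], [0.0, 1.0], [0.0, 0.0])
g2 = step_mode_grad(u, -5.0, [0.2, 0.4], [0.1, 0.9], [0.3, 0.0])
```

Here g2 uses a negative 𝜂, making its gradient anti-parallel to g1's, yet the absolute cosine similarity between the two is still exactly 1.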
Excluding HolE, all remaining models show a positive correlation between GC and 𝛽, and it appears that models with either higher GC values, or a stronger GC response to 𝛽, typically perform better at PRT. Together, this provides tentative evidence that GC is important to model coherence, as measured by PRT performance. It is possible that we do not see a monotonic benefit of GC for PRT because there are no further extrapolation or interpolation Con-A gains with 𝛽 > 8.

6. Conclusion

We provide a comprehensive analysis of relation-decoder characteristics when learning the system of relations that together define the semantics of a concept. We then compare these characteristics in a Partial Relation Transfer task setting, which determines whether, given logical constraints between relations, fitting embeddings to one relation-decoder leads to embeddings that satisfy all other relations in terms of their logical consistency and accuracy. Our results demonstrate that model consistency, and possibly gradient-conformity, across different regions of input space together determine whether a set of relation-decoders has learned a consistent and coherent notion of a given concept, in this case ordinality. These measures make it possible to check whether a set of relation-decoders has indeed learned a transferable concept, or whether it is limited to a single data domain and property.

References

[1] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, A. Lerchner, beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, in: 5th International Conference on Learning Representations, ICLR, Toulon, France, 2017.
[2] I. Higgins, D. Amos, D. Pfau, S. Racaniere, L. Matthey, D. Rezende, A. Lerchner, Towards a Definition of Disentangled Representations, arXiv preprint arXiv:1812.02230 (2018). URL: http://arxiv.org/abs/1812.02230.
[3] R. Socher, D. Chen, C.
Manning, D. Chen, A. Ng, Reasoning With Neural Tensor Networks for Knowledge Base Completion, in: Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems, 2013, pp. 926–934. arXiv:1301.3618.
[4] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, G. Bouchard, Complex Embeddings for Simple Link Prediction, in: Proceedings of the 33rd International Conference on Machine Learning, ICML, New York, NY, USA, 2016, pp. 2071–2080. arXiv:1606.06357.
[5] T. Trouillon, É. Gaussier, C. R. Dance, G. Bouchard, On inductive abilities of latent factor models for relational learning, Journal of Artificial Intelligence Research 64 (2019) 21–53. doi:10.1613/jair.1.11305. arXiv:1709.05666.
[6] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, O. Yakhnenko, Translating Embeddings for Modeling Multi-relational Data, in: C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems, Curran Associates, Inc., Lake Tahoe, USA, 2013, pp. 2787–2795.
[7] M. Nickel, K. Murphy, V. Tresp, E. Gabrilovich, A review of relational machine learning for knowledge graphs, Proceedings of the IEEE 104 (2016) 11–33. doi:10.1109/JPROC.2015.2483592. arXiv:1503.00759.
[8] Q. Wang, Z. Mao, B. Wang, L. Guo, Knowledge graph embedding: A survey of approaches and applications, IEEE Transactions on Knowledge and Data Engineering 29 (2017) 2724–2743. doi:10.1109/TKDE.2017.2754499.
[9] Y. Dai, S. Wang, N. N. Xiong, W. Guo, A Survey on Knowledge Graph Embedding: Approaches, Applications and Benchmarks, Electronics 9 (2020) 1–29. doi:10.3390/electronics9050750.
[10] S. M. Kazemi, D.
Poole, Simple embedding for link prediction in knowledge graphs, Advances in Neural Information Processing Systems 2018-December (2018) 4284–4295. arXiv:1802.04868.
[11] R. Abboud, İ. İ. Ceylan, T. Lukasiewicz, T. Salvatori, BoxE: A box embedding model for knowledge base completion, in: H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin (Eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL: https://proceedings.neurips.cc/paper/2020/hash/6dbbe6abe5f14af882ff977fc3f35501-Abstract.html.
[12] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, M. Welling, Modeling Relational Data with Graph Convolutional Networks, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10843 LNCS (2018) 593–607. doi:10.1007/978-3-319-93417-4_38. arXiv:1703.06103.
[13] Y. Bengio, A. Courville, P. Vincent, Representation learning: A review and new perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (2013) 1798–1828. doi:10.1109/TPAMI.2013.50. arXiv:1206.5538.
[14] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, P. Abbeel, InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, in: D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, R. Garnett (Eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, 2016, pp. 2172–2180. URL: https://proceedings.neurips.cc/paper/2016/hash/7c9d0b1f96aebd7b5eca8c3edaa19ebb-Abstract.html.
[15] C. P. Burgess, I. Higgins, A. Pal, L. Matthey, N. Watters, G. Desjardins, A.
Lerchner, Understanding disentangling in 𝛽-VAE, in: Advances in Neural Information Processing Systems 30, NIPS, Long Beach, CA, USA, 2017. URL: http://arxiv.org/abs/1804.03599. arXiv:1804.03599.
[16] R. T. Q. Chen, X. Li, R. B. Grosse, D. Duvenaud, Isolating Sources of Disentanglement in Variational Autoencoders, in: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems, Montreal, Quebec, Canada, 2018, pp. 2615–2625. arXiv:1802.04942.
[17] K. Ridgeway, M. C. Mozer, Learning Deep Disentangled Embeddings With the F-Statistic Loss, in: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems, Montreal, Quebec, Canada, 2018, pp. 185–194.
[18] C. Eastwood, C. K. I. Williams, A framework for the quantitative evaluation of disentangled representations, in: 6th International Conference on Learning Representations, ICLR, Vancouver, BC, Canada, 2018.
[19] A. Kumar, P. Sattigeri, A. Balakrishnan, Variational inference of disentangled latent concepts from unlabeled observations, in: 6th International Conference on Learning Representations, ICLR, Vancouver, BC, Canada, 2018. arXiv:1711.00848.
[20] F. Locatello, S. Bauer, M. Lucic, G. Rätsch, S. Gelly, B. Schölkopf, O. Bachem, Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, in: Proceedings of the 36th International Conference on Machine Learning, ICML, Long Beach, California, USA, 2019, pp. 4114–4124. arXiv:1811.12359.
[21] F. Locatello, B. Poole, G. Rätsch, B. Schölkopf, O. Bachem, M. Tschannen, Weakly-Supervised Disentanglement Without Compromises, CoRR abs/2002.02886 (2020). arXiv:2002.02886.
[22] X. Steenbrugge, S. Leroux, T. Verbelen, B.
Dhoedt, Improving Generalization for Abstract Reasoning Tasks Using Disentangled Feature Representations, in: Neural Information Processing Systems (NeurIPS) Workshop on Relational Representation Learning, Montreal, Canada, 2018. URL: http://arxiv.org/abs/1811.04784. arXiv:1811.04784. [23] S. van Steenkiste, F. Locatello, J. Schmidhuber, O. Bachem, Are Disentangled Representations Helpful for Abstract Visual Reasoning?, in: Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 2019, pp. 14222–14235. arXiv:1905.12506. [24] T. Karaletsos, S. Belongie, G. Rätsch, When crowds hold privileges: Bayesian unsupervised representation learning with oracle constraints, in: 4th International Conference on Learning Representations, ICLR, San Juan, Puerto Rico, 2016, pp. 1–16. arXiv:1506.05011. [25] J. Chen, K. Batmanghelich, Weakly Supervised Disentanglement by Pairwise Similarities, in: Proceedings of the 34th AAAI Conference on Artificial Intelligence, AAAI, New York, NY, USA, 2020. arXiv:1906.01044. [26] J. Chen, K. Batmanghelich, Robust ordinal VAE: employing noisy pairwise comparisons for disentanglement, CoRR abs/1910.05898 (2019). URL: http://arxiv.org/abs/1910.05898. arXiv:1910.05898. [27] I. Redko, A. Habrard, E. Morvant, M. Sebban, Y. Bennani, Advances in Domain Adaptation Theory, Elsevier, 2019. [28] D. P. Kingma, M. Welling, Auto-Encoding Variational Bayes, in: Proceedings of the 2nd International Conference on Learning Representations, Banff, Alberta, Canada, 2014. arXiv:1312.6114. [29] Y. Lin, Z. Liu, M. Sun, Y. Liu, X. Zhu, Learning entity and relation embeddings for knowledge graph completion, in: B. Bonet, S.
Koenig (Eds.), Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, AAAI Press, 2015, pp. 2181–2187. URL: http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9571. [30] M. Nickel, L. Rosasco, T. A. Poggio, Holographic embeddings of knowledge graphs, in: D. Schuurmans, M. P. Wellman (Eds.), Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, AAAI Press, 2016, pp. 1955–1961. URL: http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12484. [31] M. Asai, Photo-Realistic Blocksworld Dataset, arXiv preprint arXiv:1812.01818 (2018). [32] I. Donadello, L. Serafini, A. d'Avila Garcez, Logic Tensor Networks for Semantic Image Interpretation, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017, pp. 1596–1602. arXiv:1705.08968. [33] L. Serafini, A. D. Garcez, Logic tensor networks: Deep learning and logical reasoning from data and knowledge, in: Proceedings of the 11th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy'16) co-located with the Joint Multi-Conference on Human-Level Artificial Intelligence (HLAI 2016), New York, NY, USA, 2016. arXiv:1606.04422. [34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825–2830.

A.
BlockStacks dataset description

The BlockStacks dataset consists of 12,000 images (200×200 pixels, resized in code to 128×128) of individual block stacks of varying height (between 1 and 10 blocks), block colors (uniformly sampled from the options {gray, blue, green, brown, purple, cyan, yellow}) and position (uniformly sampled from the 𝑥, 𝑦 range (-3,-3) to (3,3)), but with the requirement that each instance consists of a single red block at a random height (see Figure 4 for example images). These were rendered using the CLEVR rendering agent with the help of code from [31]. The dataset is divided into 9000:1500:1500 train, validation and test splits.

Figure 4: Example of two BlockStacks data set images.

B. Explanation of the 𝛽-VAE

The VAE is derived by introducing an approximate posterior 𝑞𝛼(𝑍|𝑋), from which a lower bound (commonly referred to as the Evidence LOwer Bound (ELBO)) on the true marginal log 𝑝𝜃(𝑋) can be obtained by using Jensen's inequality [28]. The VAE maximises the log-probability by maximising this lower bound, given by:

ℒ^ELBO_𝛽-VAE = 𝔼_{𝑞𝛼(𝑍|𝑋)}[log 𝑝𝜃(𝑋|𝑍)] − 𝛽 𝐷_KL(𝑞𝛼(𝑍|𝑋) ‖ 𝑝𝜃(𝑍)),   (8)

where 𝑞𝛼(𝑍|𝑋) is the approximate posterior, typically modelled as a neural network encoder with parameters 𝛼. Similarly, 𝑝𝜃(𝑋|𝑍) is modelled as a decoder with parameters 𝜃 and is calculated as a Monte Carlo estimation. A reparameterization trick is used to enable differentiation through an otherwise undifferentiable sampling from 𝑞𝛼(𝑍|𝑋) (see [28]). In the 𝛽-VAE [1, 15], an additional scalar hyperparameter 𝛽 was added, as it was found to influence disentanglement through stronger distribution matching pressure with respect to the prior 𝑝𝜃(𝑍), where this prior is typically set to an isotropic zero-mean Gaussian 𝒩(0, 𝟙). When 𝛽 = 1 we obtain the standard VAE objective [28].

C. Model Descriptions

In this section we provide model details for each relation-decoder that we use and the VAE architecture that we employ for each data set.

C.1.
Relation Decoder implementations

TransR: 𝜙^TransR_𝑟(𝑧𝑖, 𝑧𝑗) = ‖ℎ𝑟 + 𝑟 − 𝑡𝑟‖₂, with ℎ𝑟 = 𝑀𝑟𝑧𝑖 and 𝑡𝑟 = 𝑀𝑟𝑧𝑗. As we want to obtain a [0,1] output, we modify TransR through 𝜙^TransR+_𝑟 = 𝜎(𝑐 − 𝜙^TransR_𝑟), where 𝜎 is the sigmoid function and 𝑐 is a scalar that ensures that when 𝜙^TransR_𝑟(𝑧𝑖, 𝑧𝑗) = 0, then 𝜙^TransR+_𝑟(𝑧𝑖, 𝑧𝑗) ≈ 1. In all experiments we set 𝑐 = 10.

NTN (modified version from [32, 33]):

𝜙𝑟(𝑧₀, …, 𝑧𝑛) = 𝜎(𝑢𝑟ᵀ[tanh(𝑧𝑐ᵀ𝑀𝑟𝑧𝑐 + 𝑉𝑟𝑧𝑐 + 𝑏𝑟)])   (9)

where 𝑢𝑟 ∈ ℝᵏ, 𝑀𝑟 ∈ ℝ^{(𝑛−1)⋅𝑑𝑧 × (𝑛−1)⋅𝑑𝑧 × 𝑘}, 𝑉𝑟 ∈ ℝ^{𝑘 × (𝑛−1)⋅𝑑𝑧} and 𝑏𝑟 ∈ ℝᵏ. The only hyperparameter to consider is 𝑘, which controls the NTN's capacity; in all experiments we set this to 1. Here 𝑧𝑐 is a concatenation of the inputs 𝑧₀, …, 𝑧𝑛, which was introduced in [32, 33]. In contrast, the original NTN (see [3]) is only applicable to binary relations and does not include the outer sigmoid.

HolE: 𝜙^HolE_𝑟(𝑧𝑖, 𝑧𝑗) = 𝜎(𝑟ᵀ(𝑧𝑖 ⋆ 𝑧𝑗)), where ⋆ : ℝᵈ × ℝᵈ → ℝᵈ denotes the circular correlation operator and is given by

[𝑧𝑖 ⋆ 𝑧𝑗]ₖ = ∑_{𝑚=0}^{𝑑−1} 𝑧𝑖,𝑚 𝑧𝑗,(𝑘+𝑚) mod 𝑑

NN: a simple four-layer neural network with hidden layer sizes 𝑙in = 2𝑑𝑧, 𝑙₁ = 2𝑑𝑧 and 𝑙₂ = 𝑑𝑧, with ReLU activations, for latent representations of size 𝑑𝑧. The final output layer, 𝑙out, is a single value passed through a sigmoid function, to cap the output within [0,1].

C.2. VAE configuration

In all representation learning experiments, we use a 𝛽-VAE trained for 300,000 steps, following accepted practice from [20, 22]. The encoder-decoder model parameters are given in Table 1; we include the model configurations used for both MNIST and BlockStacks datasets.

C.3. ℒ_joint configuration

In the source domain, we vary 𝛽 values between {1, 4, 8, 12} and fix 𝜆 = 10³. In the target domain, we fix 𝛽 to 10⁻⁴ and 𝜆 = 10⁻², normalise the ℒ^ELBO_𝛽-VAE reconstruction term by dividing by a factor √(𝐻⋅𝑊⋅𝐶), for height 𝐻, width 𝑊 and color channels 𝐶, and normalize ℒ(𝜓^𝑡_enc, 𝒩(0, 𝟙)) by a factor 1/𝑑𝑧, for latent representation size 𝑑𝑧.
D. Supplementary Results

Figure 5 and Figure 6 provide additional results for Con-I (individual consistency scores for individual relation properties covering transitivity, asymmetry and reflexivity) and Con-A, configured on the same data splits as described in the main text. These results cover variants of the DC and NN models.

Table 1: Specification of our 𝛽-VAE encoder and decoder model parameters, for both 28×28 (top) and 128×128 (bottom) size input data. I: Input channels, O: Output channels, K: Kernel size, S: Stride, P: Padding, A: Activation.

28×28 input (𝑁𝐶 = 1):

Encoder (Layer_ID ; I ; O ; K ; S ; P ; A):
Conv2d_1 ; 𝑁𝐶 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
Conv2d_2 ; 32 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
Conv2d_3 ; 32 ; 64 ; 3×3 ; 2 ; 1 ; ReLU
Conv2d_4 ; 64 ; 64 ; 2×2 ; 2 ; 1 ; ReLU
Fully connected (Layer_ID ; Num Nodes: In - Out ; A):
FC_z ; 576 - 144 ; ReLU
FC_z_mu ; 144 - 10 ; None
FC_z_logvar ; 144 - 10 ; None

Decoder (input: ℝ¹⁰):
FC_z ; 10 - 144 ; ReLU
FC_z_mu ; 144 - 576 ; ReLU
UpConv2d_1 ; 64 ; 64 ; 2×2 ; 2 ; 1 ; ReLU
UpConv2d_2 ; 64 ; 32 ; 3×3 ; 2 ; 1 ; ReLU
UpConv2d_3 ; 32 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
UpConv2d_4 ; 32 ; 𝑁𝐶 ; 4×4 ; 2 ; 1 ; Sigmoid

128×128 input (𝑁𝐶 = 3):

Encoder (Layer_ID ; I ; O ; K ; S ; P ; A):
Conv2d_1 ; 𝑁𝐶 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
Conv2d_2 ; 32 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
Conv2d_3 ; 32 ; 64 ; 4×4 ; 2 ; 1 ; ReLU
Conv2d_4 ; 32 ; 64 ; 4×4 ; 2 ; 1 ; ReLU
Conv2d_5 ; 64 ; 64 ; 4×4 ; 2 ; 1 ; ReLU
Fully connected (Layer_ID ; Num Nodes: In - Out ; A):
FC_z ; 1024 - 256 ; ReLU
FC_z_mu ; 256 - 10 ; None
FC_z_logvar ; 256 - 10 ; None

Decoder (input: ℝ¹⁰):
FC_z ; 10 - 256 ; ReLU
FC_z_mu ; 256 - 1024 ; ReLU
UpConv2d_1 ; 64 ; 64 ; 4×4 ; 2 ; 1 ; ReLU
UpConv2d_2 ; 64 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
UpConv2d_3 ; 32 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
UpConv2d_4 ; 32 ; 32 ; 4×4 ; 2 ; 1 ; ReLU
UpConv2d_5 ; 32 ; 𝑁𝐶 ; 4×4 ; 2 ; 1 ; Sigmoid
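As a sanity check on the layer specifications in Table 1, the flattened feature sizes feeding the first fully connected layer (576 for the 28×28 encoder, 1024 for the 128×128 encoder) can be recovered from the standard convolution output-size formula. A minimal sketch (layer tuples transcribed from Table 1; function and variable names are ours):

```python
# Output spatial size of a conv layer: floor((in + 2*pad - kernel) / stride) + 1
def conv_out(size, kernel, stride, pad):
    return (size + 2 * pad - kernel) // stride + 1

def flat_features(input_size, layers):
    """layers: list of (out_channels, kernel, stride, pad) per conv layer."""
    size = input_size
    channels = None
    for channels, kernel, stride, pad in layers:
        size = conv_out(size, kernel, stride, pad)
    return channels * size * size  # flattened size fed to FC_z

# 28x28 encoder (Table 1, top): kernels 4, 4, 3, 2; stride 2; padding 1
mnist = [(32, 4, 2, 1), (32, 4, 2, 1), (64, 3, 2, 1), (64, 2, 2, 1)]
# 128x128 encoder (Table 1, bottom): five 4x4 convs, stride 2, padding 1
blockstacks = [(32, 4, 2, 1), (32, 4, 2, 1), (64, 4, 2, 1),
               (64, 4, 2, 1), (64, 4, 2, 1)]

print(flat_features(28, mnist))        # 576, matching FC_z: 576 - 144
print(flat_features(128, blockstacks)) # 1024, matching FC_z: 1024 - 256
```

The 28×28 input shrinks as 28 → 14 → 7 → 4 → 3 with 64 final channels (64·3·3 = 576), and the 128×128 input halves five times to 4×4 (64·4·4 = 1024), consistent with the table.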
DC variants include: DC-Basic, which uses the same 𝜙𝑟‡ as DC, but a 𝜙𝑟† similar to that of [25], including the dynamic 𝑢† mask and 𝑏† offset; DC-Gaus, again with the same 𝜙𝑟‡ but using a Gaussian function for 𝜙𝑟†; DC-Cauchy, which uses a Cauchy distribution form for 𝜙𝑟† and a Cauchy cumulative distribution function for 𝜙𝑟‡; and finally DC-CCS, which employs a modified Cauchy distribution for 𝜙𝑟†, via

𝜎(𝜂(2 ⋅ 𝜙^{DC-Cauchy,†}_𝑟 − 1)),

where 𝜎 is the sigmoid function and 𝜂 is a scalar value. This modification enables a cliff-like shape for 𝜙𝑟†, such that it can output close to 1 for a wider vector difference range. Note that all distribution forms are unnormalized so that they cover the interval [0,1].

Figure 5: Consistency values for individual relation properties (Con-I), covering: transitivity, reflexivity and asymmetry. Values are for variants of DC and NN relation-decoder models, referenced to the source (MNIST) domain (lower values better). In all plots, darker color shades denote higher values of 𝛽 (in range {1, 4, 8, 12}), corresponding to greater disentanglement pressure from the 𝛽-VAE. Blue, green and red groups show results for data-embeddings, interpolation and extrapolation embeddings respectively (see main text for details on these data splits).

Figure 6: Con-A values for variants of DC and NN relation-decoder models, referenced to the source (MNIST) domain (lower values better). In all plots, darker color shades denote higher values of 𝛽 (in range {1, 4, 8, 12}), corresponding to greater disentanglement pressure from the 𝛽-VAE. Blue, green and red groups show results for data-embeddings, interpolation and extrapolation embeddings respectively (see main text for details on these data splits).

The NN variants vary layer depth and size, but all use a common input layer of size 𝑙in = 2𝑑𝑧. NN2 is a three-layer neural network with hidden layer size 𝑑𝑧, and NN3 is a four-layer neural network which is the same as NN, but in contrast has a 𝑑𝑧 pre-final layer size, thereby omitting
the bottleneck dimension reduction of NN. NN1-shallow includes only one hidden layer, like NN2, of size 𝑑𝑧(𝑑𝑧 − 1)/2, which enables a pairwise comparison between each input dimension. NN1-sig is the same as NN but employs sigmoid activations instead of ReLUs. NN-DC is again the same as the NN from the main text, but includes an additional 𝜙𝑟†-type node that can compute relative differences between inputs in the same way as DC.

Figure 7: Analysis of domain-specific information retention by the 𝛽-VAE when using different relation-decoders for ordinality relation decoding. We attempt to predict the overall BlockStacks stack height on the final fixed embeddings obtained after isSuccessor relation-decoder alignment.

E. How does each model impact the retention of domain-dependent information?

Figure 7 shows results for BlockStacks overall block height prediction accuracy when training on fixed encodings of each block stack, after isSuccessor relation-decoder alignment has been applied. Note that 𝛽 is fixed in the target domain, so the only moving parts are the pretrained models, which are trained with varied source 𝛽 values. Note also that DC has an unfair advantage here, as the steered fitting approach allows more flexibility in the VAE learning phase; for this reason the result is only included in the appendix. Since we are interested in capturing general representations that encode both domain-dependent and -independent information, we use each target encoder 𝜓^𝑡_enc obtained from each PRT experiment and produce encodings for the full BlockStacks test set. The resulting encodings are then divided into new train and test subsets, used to train both a scikit-learn Linear regressor and a Support Vector Machine regressor with an RBF kernel [34]. We present the resulting Mean Squared Errors (MSE) in Figure 7, with Ordinary Least Squares (OLS) (a) and Support Vector Regression (SVR) (b).
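The evaluation protocol above can be sketched as follows. This is our reconstruction, not the authors' code: the random matrix stands in for the fixed 10-dimensional target-domain encodings, and the synthetic targets stand in for stack heights (1–10); only the scikit-learn estimators named in the text are assumed.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for fixed encodings psi_enc^t(x) with d_z = 10, and stack heights 1-10
Z = rng.standard_normal((1500, 10))
heights = np.clip(np.round(2.0 * Z[:, 0] + 5.5), 1, 10)

# Divide the encodings into new train and test subsets, as in Appendix E
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, heights, test_size=0.25, random_state=0)

ols = LinearRegression().fit(Z_tr, y_tr)  # Ordinary Least Squares (a)
svr = SVR(kernel="rbf").fit(Z_tr, y_tr)   # Support Vector Regression, RBF kernel (b)

print("OLS MSE:", mean_squared_error(y_te, ols.predict(Z_te)))
print("SVR MSE:", mean_squared_error(y_te, svr.predict(Z_te)))
```

In the paper the two MSE values per model/𝛽 setting are what Figure 7(a) and 7(b) report.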
There are a number of noteworthy details: firstly, DC shows no dependence on 𝛽 and leads to a lower MSE across all settings; secondly, excluding DC, for all models we observe an optimum MSE at 𝛽 = 8, with TransR matching DC's MSE performance for OLS and NN doing the same for SVR. These results indicate that lower MSE can be obtained by using non-linear regression, which suggests that, to some degree, the block stack height factor is not encoded linearly, regardless of the selected model. Next, by contrasting with Figure 3-bottom, these results suggest that models with higher GC lead to embeddings that are more amenable to domain-specific factor prediction. However, the parabolic trend, where increasing 𝛽 to 12 leads to an increase in error, is in agreement with Figure 2-bottom-right, which showed that most models do not improve at PRT for the largest 𝛽.
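The inference that a non-linearly encoded factor favours kernel regression can be illustrated on entirely synthetic data (not the paper's embeddings): when the target is a non-linear function of the latent coordinates, an RBF-kernel SVR recovers it while OLS cannot improve on predicting the mean.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
Z = rng.uniform(-2, 2, size=(1000, 2))
y = (Z ** 2).sum(axis=1)  # factor stored non-linearly in the latents

Z_tr, Z_te, y_tr, y_te = Z[:800], Z[800:], y[:800], y[800:]

ols_mse = mean_squared_error(
    y_te, LinearRegression().fit(Z_tr, y_tr).predict(Z_te))
svr_mse = mean_squared_error(
    y_te, SVR(kernel="rbf", C=10.0).fit(Z_tr, y_tr).predict(Z_te))

# y is even in each latent, so the best linear fit is roughly the constant mean;
# the RBF kernel captures the quadratic structure and attains a much lower MSE
print(f"OLS MSE: {ols_mse:.3f}, SVR MSE: {svr_mse:.3f}")
```

The gap between the two errors mirrors the OLS-vs-SVR gap discussed above, under the (toy) assumption of a purely quadratic encoding.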