Machine Learning in Space: A Review of Machine Learning Algorithms and Hardware for Space Applications⋆

James Murphy1,2 [0000-0002-3984-1371] jmurphy@realtra.space, John E Ward1 [0000-0003-1973-0794] jward@realtra.space, and Brian Mac Namee2 [0000-0003-2518-0274] brian.macnamee@ucd.ie

1 Réaltra Space Systems Engineering, Clonshaugh, Dublin 17, Ireland
2 School of Computer Science, University College Dublin, Belfield, Dublin 4

⋆ This work was supported by Réaltra Space Systems Engineering. This work was also supported by the Irish Research Council under the Employment-Based Postgraduate Scheme (IRC-EBPPG/2020/11). Copyright 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract. Modern satellite complexity is increasing, thus requiring bespoke and expensive on-board solutions to provide a Failure Detection, Isolation and Recovery (FDIR) function. Although FDIR is vital in ensuring the safety, autonomy, and availability of satellite systems in flight, there is a clear need in the space industry for a more adaptable, scalable, and cost-effective solution. This paper explores the current state of the art in machine learning error detection and prognostic algorithms utilized by both the space sector and the commercial sector. Although work has previously been done in the commercial sector on error detection and prognostics, most commercial applications are not nearly as limited by the power, mass, and radiation tolerance constraints imposed by operation in a space environment. Therefore, this paper also discusses several Commercial Off-The-Shelf (COTS) multi-core microprocessors: small-footprint boards that will be explored as possible testbeds for future integration into a satellite in-orbit demonstrator.

Keywords: Machine Learning · Edge AI · Space.

1 Introduction

While traditional Failure Detection, Isolation and Recovery (FDIR) techniques are generally good at detecting single failures, they are limited in isolation capabilities and struggle when multiple faults combine in unforeseen ways. Additionally, these systems offer limited capabilities for prognosis of future issues, reducing the opportunities to catch and correct potentially catastrophic problems. Most FDIR functions introduce automatic actions that are customized, bespoke, and complex. However, with the advance of space-based, low-power, high-performance computing systems, more advanced FDIR functionality can be developed and deployed to greatly enhance the autonomous reaction of the spacecraft to immediate and foreseen failure modes. Specifically, the use of on-board machine learning algorithms that actively learn from in-flight data to diagnose and react rapidly to these current and future failures will minimize performance loss and thus provide an invaluable ability for the optimal performance of space-based assets.

One of the growing research topics in all major space agencies is the application of machine learning to both downstream (e.g., data analytics of Earth observation data [17]) and upstream (e.g., applying machine learning techniques in spacecraft on-board systems [1]) data. When developing machine learning solutions for upstream tasks, the specific requirements of space hardware mean that the footprint of the electronic devices carried must be as small as possible to reduce mass and stowage volume. Furthermore, due to the restricted power budgets of space missions, devices must also be low powered.
Current FDIR space systems are considered crude but effective [20], and prognosis is virtually non-existent in space applications of machine learning. The ability to diagnose a potential issue before it becomes a problem is much desired in space. This has the potential of lengthening electronic component lifetimes in space systems and of assisting any potential damage circumvention and recovery. Commercial companies such as Deutsche Bahn [12] and Airbus [16] are currently researching prognostics for future applications in non-space domains.

Most FDIR systems have physical circuit monitors such as latch-up protection or voltage/current monitoring systems. These add heavy and expensive components to a board to provide the ability to recover. In general, stringent requirements on the continued functionality of components are imposed, such that components must be pre-screened to ensure increased chances of survival in long-term missions. Additionally, redundancy is usually included in many designs, adding to the complexity and cost of the system. These factors result in an increase in cost that tends to compound as a system becomes larger. Adding a system that can compensate for unexpected inputs may reduce potential fail points, thereby reducing overall costs.

Research into anomaly detection has also been conducted on time-series data in live data-streaming settings. The scenario in space is even more challenging than in terrestrial applications due to the extremely harsh environment. Boards must survive everything from the severe vibrations of a rocket launch to the extreme radiation and thermal environment of space, which requires hardware to be robust and tested against these conditions. This is one of the largest factors contributing to the cost of these products. Creating a system that reduces the need for these intensive tests is the next step for space-rated computer systems. This is where an opportunity exists to utilize ML techniques to reduce the reliance on testing.

This paper reviews the state of the art in applying machine learning methods in space applications and describes and compares the leading currently available COTS boards for space-based machine learning. Section 2 describes applications of machine learning in the space domain, with a particular focus on FDIR. Section 3 covers the current applications of edge machine learning for commercial applications; Deutsche Bahn and Airbus are used as examples here. Section 4 summarises the candidate hardware and compares computing performance and power consumption. Section 5 contains the conclusion of this paper.

2 Machine Learning in Space

In the space domain, the use of machine learning techniques is already being explored for Earth observation applications [17], astronomy [5], sensor fusion for navigation [6], and satellite operations [3]. The availability of open-source software tools and low-cost cloud-based computing hardware, through services such as Google Colab (https://colab.research.google.com), has allowed for rapid development of these examples. It is believed that machine learning techniques can also benefit future space transportation systems, in applications such as avionics and system health monitoring [2]. This can also lead to the development of inexpensive electronic systems for space-based operations [2].

Machine learning applications for space can be broken down into two categories: space-based applications and ground-based applications.
These have vastly different requirements when it comes to the size, weight, and power constraints of the hardware used. For example, a board in space must deal with harsh environments with regard to temperature and radiation. This places limitations on the board, such as restrictions on part density and cooling systems, which in turn limit the capabilities of any deployed algorithm. A ground-based system may not be as useful as an in-orbit system because mission link budgets mean it receives substantially less data; however, it operates without the physical limitations imposed in orbit. This section explores examples of each type of system, and the machine learning techniques used within each example are also described.

2.1 Space-based Applications

This section introduces two representative examples of space-based applications of machine learning, one based on image analysis and a second based on anomaly detection within a sensor stream.

Image Analysis - Earth Observation. Due to the computational demands of machine learning, applications in space have been limited. Phi-Sat-1 is the European Space Agency's (ESA) first attempt at putting an Edge AI board in space. It launched successfully on September 3rd 2020 on-board a European Vega rocket [1], and is the first in-orbit demonstration of an Edge AI board. Phi-Sat-1 is a CubeSat focused on Earth observation and on-board image analysis. Its primary payload consists of a hyperspectral imager and the Edge AI board. It is operated by ESA's Phi Lab, which focuses on machine learning applications in space.

Fig. 1. The structure of a convolutional neural network (CNN) (reproduced from [19])

The revolutionary idea of Phi-Sat-1 was that if an Edge AI board could be put on-board a satellite with an image analysis algorithm deployed onto it to detect cloudy images, then cloudy images need never be downlinked, easing the demand on the link budget and saving precious bandwidth for the mission. To accomplish this, ESA chose the Intel Movidius Myriad 2 chip [10] as their hardware accelerator due to its low mass and power requirements. Ubotica was contracted to develop the algorithm and to qualify the chipset for space-based operations. This led to an intensive qualification campaign of the Myriad 2 chip on the Ubotica UB0100 CubeSat board, the result being the first Edge AI board qualified for in-orbit operations. At the time of writing, initial results from the Phi-Sat-1 mission are promising and ESA has renewed a contract with Ubotica to develop Phi-Sat-2. Conducting the image analysis on-board has saved up to 90% of the bandwidth for a similar outcome when compared to a ground-based analysis [1].

The method used by Phi-Sat-1 to detect clouds is a convolutional neural network (CNN) [13]. A CNN is an artificial neural network designed to recognize patterns efficiently and accurately within structured arrays of data such as images (an illustration of the architecture of a CNN is shown in Fig. 1). CNNs have become the standard approach for computer vision problems [1], making them ideal for Earth observation scenarios such as the one addressed by Phi-Sat-1. These models tend to be quite large due to the size of the images being analyzed, especially in Earth observation where there are TBs of raw image data per orbit. The success of this method on Phi-Sat-1 has proven the usability and survivability of powerful Edge AI boards in an in-orbit environment. This was accomplished due, in part, to the relatively high level of computational power available on-board Phi-Sat-1 thanks to the Myriad 2 chipset, allowing a CNN to operate.
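The flight model used on Phi-Sat-1 is not reproduced in this paper; purely as an illustration, the following sketch (in Python, using the Keras API) shows the kind of small cloudy/not-cloudy patch classifier described above. The input shape, layer sizes, and training settings are assumptions made for the sketch, not the Phi-Sat-1 configuration.

# Illustrative sketch only: a small CNN for cloudy / not-cloudy patch
# classification, in the spirit of the on-board image analysis described
# above. The input shape, layer sizes, and training settings are
# assumptions, not the Phi-Sat-1 flight model.
from tensorflow.keras import layers, models

def build_cloud_classifier(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # P(patch is cloudy)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: train on labelled patches, then downlink only images predicted clear.
# model = build_cloud_classifier()
# model.fit(train_patches, train_labels, epochs=10, validation_split=0.1)
# keep_for_downlink = model.predict(new_patches)[:, 0] < 0.5

The design choice here mirrors the on-board use case: a small, fixed-size network whose single sigmoid output can be thresholded to decide whether an image is worth downlinking.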
Anomaly Detection - ESA's Future Launcher Preparatory Program. ESA's Future Launcher Preparatory Program (FLPP) [2] is currently investigating commercial off-the-shelf (COTS) avionics solutions for launchers employing machine learning techniques. The primary goal is to detect anomalies during flight and potentially rectify these issues by creating a generalized building block to protect avionics from the environment experienced by a launcher. The study aimed to identify the most promising boards and algorithms for time-series datasets in a launcher environment, taking into account certain limitations on a potential system due to the harsh environment of a launcher. The benefits and risks associated with different combinations of machine learning method and board were also explored in this work. This resulted in the development and prototyping of several proofs of concept. Given that housekeeping data was to be monitored, a time-series audio dataset was created [2]. The most promising result found was a long short-term memory (LSTM) based autoencoder.

Fig. 2. The architecture of a simple autoencoder network (reproduced from [13])

An autoencoder [13] is a type of artificial neural network used to learn efficient data encodings in an unsupervised manner (the architecture of a simple autoencoder is shown in Fig. 2). The aim of an autoencoder is to learn a representation (encoding) for a set of data, reducing the memory requirements. The key to autoencoders is not only the encoding side but also the reconstructing (or decoding) side, where the autoencoder tries to regenerate the data from the reduced encoding as closely as possible to the original input. Autoencoders are often trained with only a single-layer encoder and a single-layer decoder, but using deeper multi-layer encoders and decoders offers advantages. To handle the temporal data present in FDIR-focused space applications, values can be presented to an LSTM sequentially.

2.2 Ground-based Applications

This section describes a ground-based application of machine learning in the space domain based on anomaly detection.

Anomaly Detection - Downstream Anomaly Detection. Hundman et al. [3] explore the possibility of replacing the satellite operator with an automated solution based on machine learning. This work addressed an important and growing challenge within the satellite telemetry sector. LSTM autoencoders were found to be the most applicable method for detecting spacecraft telemetry anomalies while addressing key challenges around interpretability and complexity. This work has been deployed on the Soil Moisture Active Passive (SMAP) satellite ground segment, where over 700 channels are monitored in real time. Several anomalies have been correctly identified thus far. However, there have also been multiple false positives, showing the need for further refinements of the model [3].

Fig. 3. The architecture of a neural network based on LSTM layers.

A Long Short-Term Memory (LSTM) [18] network is a type of recurrent neural network (RNN) (the architecture of an LSTM is shown in Fig. 3). Like other RNNs, LSTMs have feedback connections, but unlike simple RNNs they preserve error signals that can be backpropagated over many steps, allowing them to keep learning across long sequences. An LSTM unit is composed of a cell, an input gate, an output gate, and a forget gate, and can process both single data points and sequences of data. LSTMs hold information outside the normal flow of the recurrent network in the gated cell, which allows the cell to be treated like computer memory with read, write, and storage operations. This makes LSTMs well suited to working with time-series data.
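Neither the FLPP prototype nor the SMAP deployment is specified above in enough detail to reproduce; the following is a minimal, illustrative sketch of an LSTM autoencoder for telemetry windows, assuming Keras and a reconstruction-error threshold derived from nominal training data. The window length, layer widths, and the 99th-percentile threshold rule are assumptions, not the configurations used in [2] or [3].

# Illustrative sketch only: an LSTM autoencoder that flags telemetry windows
# with unusually high reconstruction error. Window length, layer widths, and
# the percentile threshold rule are assumptions, not the FLPP or SMAP
# configurations described above.
import numpy as np
from tensorflow.keras import layers, models

def build_lstm_autoencoder(window=50, n_channels=8, latent=16):
    return models.Sequential([
        layers.Input(shape=(window, n_channels)),
        layers.LSTM(latent),                               # encoder: compress the window
        layers.RepeatVector(window),                       # repeat the code per time step
        layers.LSTM(latent, return_sequences=True),        # decoder
        layers.TimeDistributed(layers.Dense(n_channels)),  # reconstruct all channels
    ])

def reconstruction_error(model, windows):
    """Mean squared reconstruction error per window."""
    return np.mean((model.predict(windows) - windows) ** 2, axis=(1, 2))

# Usage: train on nominal telemetry only, then flag windows the model cannot
# reconstruct well (here: error above the 99th percentile of training error).
# ae = build_lstm_autoencoder()
# ae.compile(optimizer="adam", loss="mse")
# ae.fit(nominal_windows, nominal_windows, epochs=20, batch_size=64)
# threshold = np.percentile(reconstruction_error(ae, nominal_windows), 99)
# anomalies = reconstruction_error(ae, live_windows) > threshold

Training only on nominal data is what turns the autoencoder into an anomaly detector: anything the model has not learned to compress and rebuild shows up as high reconstruction error.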
3 Machine Learning for FDIR in Other Domains

This section surveys interesting examples of machine learning-based FDIR solutions in non-space domains. We describe examples based on anomaly detection, and on prognostics and prediction.

3.1 Anomaly Detection

Airbus is attempting to reduce operational and maintenance costs by implementing an in-service failure detection system, with the goal of it becoming a fully prognostic system [16]. To this end, Airbus has developed a machine learning based diagnostics and prognostics (DnP) framework and has begun deploying prototypes of this framework on non-critical flight systems [16]. If successful, this is expected to increase operational reliability, drive down maintenance costs, and increase safety. Currently, Airbus uses a predictive maintenance schedule, changing out parts that are still functional after a certain amount of time and/or number of cycles, reducing the potential lifetime of components. Modern advanced aircraft systems allow more data gathering, enabling data acquisition on the scale required for machine learning. Even so, the primary issue with developing machine learning algorithms for aircraft at the moment is a lack of adequate and appropriate in-service failure data. Airbus is also attempting to implement a smarter approach to anomaly handling on aircraft, motivated by the current lack of advance warning of failure events and of fault isolation. Predictive maintenance provides an integrated solution, but this is expected to change as machine learning becomes more prevalent in aerospace.

Deutsche Bahn has been attempting to reduce train delays through the use of machine learning. Managing a rail network is a complicated matter, leading to operators both making mistakes and missing potential anomalies. Deutsche Bahn has developed its own time-series dataset covering its rail network, which can be used to detect anomalies and account for potential delays before they happen. Implementing a machine learning algorithm has the potential to catch these issues and even detect anomalies not visible to the naked eye. Deutsche Bahn has been developing an online train delay prediction tool, trained using real data from its rail network [12]. The goal of this system is to identify future delays as early as possible. This may lead to future work on a prognostics system, enabling further predictions and allowing operators to avoid problems in advance, reducing rail delays across the network. The tool has been deployed in its early stages and is showing promising results in reducing delays, which has enabled Deutsche Bahn to implement a prototype semi-autonomous FDIR system.
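The Deutsche Bahn tool itself is not described in [12] beyond the summary above; purely as an illustration of framing delay prediction as supervised learning, the following sketch trains a gradient-boosted regressor on synthetic, hypothetical timetable features (scheduled hour, route length, number of stops, upstream delay, day of week). The feature set, data, and model choice are assumptions, not the deployed system.

# Illustrative sketch only: train-delay prediction framed as supervised
# regression, in the spirit of the Deutsche Bahn work described above.
# The feature set, synthetic data, and model choice are assumptions,
# not the system described in [12].
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: scheduled hour, route length (km), number of stops,
# current upstream delay (min), and day of week.
X = np.column_stack([
    rng.integers(0, 24, n),
    rng.uniform(10, 600, n),
    rng.integers(2, 30, n),
    rng.exponential(3.0, n),
    rng.integers(0, 7, n),
])
# Synthetic target: delay grows with upstream delay and route length.
y = 0.8 * X[:, 3] + 0.01 * X[:, 1] + rng.normal(0, 1.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE (minutes):", mean_absolute_error(y_te, model.predict(X_te)))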
3.2 Prognostics & Prediction

With the advent of low-cost, high-volume Internet of Things (IoT) devices, automated machine health monitoring has become more practical [14]. Machine health monitoring systems can include volt/ammeters, microphones, and camera systems. IoT developments have made networking these systems much easier, enabling the generation of large datasets for machine learning algorithms. Data-driven machine health monitoring provides new insights into methods of fault detection and recovery for large systems. It also shows promise for predicting potential failures in these systems, increasing their lifespan. In space systems, mission lifetimes could be lengthened if potential anomalies could be detected early.

Significant work on prediction models to date has been done on the stock market and stock market prediction [15]. Stock brokers normally use two types of measures to predict price fluctuations: fundamental and technical. Fundamental measures are based on the intrinsic price of a stock along with the state of the economy and the political environment. Technical analysis is based on statistical measures such as market values and past volumes. The data-driven nature of technical analysis has enabled the deployment of machine learning algorithms to improve prediction accuracy. Statistical methods have been used to track the movement of stock prices, generating the datasets required for machine learning algorithms to predict future prices [15]. With sufficient data pre-processing, such models can predict stock prices accurately, and they represent the current state of the art in ML applications for prognostics and prediction.
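As a minimal illustration of the data-driven technical-analysis workflow described above, the following sketch derives two simple indicators from a synthetic price series and trains a classifier to predict next-day movement. The indicator set and classifier are assumptions made for illustration and are not the specific method evaluated in [15].

# Illustrative sketch only: simple technical indicators derived from a
# synthetic price series, used to predict next-day movement. The indicator
# set and classifier are assumptions, not the exact method of [15].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))  # synthetic price series

win = 10
X, y = [], []
for t in range(win, len(prices) - 1):
    moving_avg = prices[t - win + 1 : t + 1].mean()   # 10-day moving average
    momentum = prices[t] - prices[t - win]            # 10-day momentum
    X.append([prices[t] / moving_avg - 1.0, momentum])
    y.append(int(prices[t + 1] > prices[t]))          # 1 = price rises tomorrow
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))                             # chronological split
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:split], y[:split])
print("Directional accuracy:", clf.score(X[split:], y[split:]))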
4 Edge AI Hardware

This section reviews the Edge AI boards most applicable to space-based systems. Edge AI boards allow machine learning algorithms to be run in constrained environments. They are low-power but high-performance devices, making them ideal for a space mission. Power draw is considered the most important factor due to the limited power generation capabilities on board satellite subsystems. The boards investigated in this paper cover a broad range of power draws and computational power (measured in trillions of operations per second (TOPS)), allowing a wide range of potential results when used with deployed machine learning algorithms. The boards are also based on differing circuit technologies, such as FPGAs and ASICs, allowing different approaches to dealing with radiation environments.

The options for a small-footprint board are currently limited, even for terrestrial applications, due to the processing power required to perform machine learning algorithms. The number of options for radiation-hardened, space-grade boards is even smaller, as most space-quality hardware is several years behind the terrestrial market. The following boards are interesting from the point of view of potentially running space-based applications of machine learning:

– Nvidia Jetson Xavier NX: The Nvidia Jetson Xavier NX (shown in Fig. 4(a)) is a high-power, small-footprint Edge AI board using the Nvidia 12 nm architecture. It is capable of up to 32 TOPS of computing power and draws a minimum of 10 W of power. The Jetson Xavier also uses Nvidia's software development suite JetPack (https://developer.nvidia.com/embedded/jetpack), allowing cross-compatibility across the entire Jetson family of boards [7]. The Xavier NX model is used for intensive operations with high TOPS requirements, giving a good baseline for more powerful non-edge boards.

– Huawei Atlas 200: The Huawei Atlas 200 (shown in Fig. 4(b)) is one of the closest competitors to the Nvidia Jetson Xavier in terms of Edge AI computing. The Ascend 310 chip on the Atlas board is designed for image processing and other machine learning applications. This gives the Atlas 200 up to 22 TOPS of machine learning performance at a maximum of 20 W [8]. The Atlas is comparatively expensive and offers fewer TOPS per watt, but it is nevertheless a good comparison point for the Jetson Xavier.

– Google Coral: The Google Coral (shown in Fig. 4(c)) is powered by a quad-core Cortex-A53 processor and uses a Google Edge TPU as a co-processor to provide 4 TOPS at only 2 W. The Google Coral is tied as the most efficient board surveyed in this paper at 2 TOPS/Watt [9]; the Intel Myriad X also achieves this efficiency. The Google Coral is available both as a larger development board and as a small USB-style accelerator. The larger development board assists software development and debugging before deployment on the accelerator unit.

– Intel Movidius Myriad 2/X: The Intel Neural Compute Stick (shown in Fig. 4(d)) is powered by an Intel Movidius Myriad 2 chipset. The Myriad 2 supplies the board with 1 TOPS at 1 W [10]. The NCS also utilizes Intel's OpenVINO software suite to accelerate machine learning algorithms for use on these boards. Intel has already released the Myriad X powered Neural Compute Stick 2, which gives 2 TOPS at 1 W, making it a much more powerful board [11]. However, the Myriad 2 chip is the only chip in this list that also has space heritage and has been qualified for the space environment. The Myriad 2 VPU was integrated into the Phi-Sat-1 mission [1] as its primary inference device for image analysis, making the Myriad 2 the first Edge AI device to fly on a space mission.

Fig. 4. Edge AI hardware: (a) Nvidia Jetson Xavier NX (left: heat-sink cooler, right: bare board), (b) Huawei Atlas 200, (c) Google Coral Development Board, and (d) Intel Neural Compute Stick.

To compare the set of boards listed above, we focus on power and TOPS, as space-based applications have a hard limit on power inputs. However, price in USD is also used in this analysis. Table 1 lists the specifications of each of the boards investigated in this paper.

Table 1. A comparison of a set of AI boards based on power, price, TOPS, and whether or not they have previously been used in space applications (space heritage). Note "Pending" means that a mission is currently planned but has not yet been launched.

Board     | Power (W) | TOPS | Price (USD) | Mass (g) | Op. Temp (°C) | Space Heritage
Xavier    | 15        | 35   | 400         | 280      | -25 to +90    | Pending
Atlas     | 20        | 22   | 950         | 320      | -25 to +80    | No
Coral     | 2         | 4    | 100         | 20       | -40 to +85    | Pending
Myriad X  | 2         | 4    | 80          | 80       | -40 to +105   | No
Myriad 2  | 2         | 2    | 60          | 80       | -40 to +105   | Yes

Fig. 5. Board Comparison Plot

Fig. 5 shows a scatter plot illustrating the performance of the different boards surveyed based on TOPS/Watt and USD/TOPS. This gives an overview of the wide array of options available in the commercial market and a suggestion of which board offers the best value per Watt and per USD. In terms of TOPS/Watt and USD/TOPS, the boards are quite similar. This means that the intended application will be the determining factor in which board could be used. Small satellite applications have specific requirements on wattage, reducing this list to those boards whose total draw is less than 5 Watts. However, if the power budget exists, the Jetson Xavier NX is overall the most power-efficient board.

Fig. 5 shows how each board performs as a function of TOPS/Watt; however, the total power draw must also be considered for these boards, as it is a primary limitation of any space mission. Space missions vary depending on the mission requirements: deep space missions tend to be larger and have larger power budgets. Missions like these could warrant the use of higher powered boards such as the Jetson Xavier NX or the Huawei Atlas. Smaller missions in low Earth orbit (LEO) tend to use smaller satellites and therefore have smaller power budgets. In this case the Myriad X, Myriad 2, or Google Coral would be warranted. With the Myriad 2 already space proven and qualified, it would be the natural choice for a mission like this, as it has also been successfully deployed in a CubeSat PC/104-standard form factor.
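The comparison metrics discussed above can be reproduced directly from the Table 1 figures. The short Python sketch below computes TOPS/Watt and USD/TOPS and applies the 5 W small-satellite power cut-off mentioned earlier; the ranking helper itself is only an illustration, not a tool used in this study.

# Metrics behind the comparison above, computed from the Table 1 figures.
# The 5 W cut-off for small-satellite power budgets follows the discussion
# in Section 4; the ranking helper itself is only an illustration.
boards = {
    # name:      (power W, TOPS, price USD) -- values from Table 1
    "Xavier NX": (15, 35, 400),
    "Atlas 200": (20, 22, 950),
    "Coral":     (2, 4, 100),
    "Myriad X":  (2, 4, 80),
    "Myriad 2":  (2, 2, 60),
}

def rank_boards(boards, max_power_w=None):
    rows = []
    for name, (power, tops, price) in boards.items():
        if max_power_w is not None and power > max_power_w:
            continue                      # outside the mission power budget
        rows.append((name, tops / power, price / tops))
    # Sort by efficiency (TOPS/W) first, then by cost per TOPS.
    return sorted(rows, key=lambda r: (-r[1], r[2]))

# Example: boards that fit a small-satellite budget of 5 W.
for name, tops_per_w, usd_per_tops in rank_boards(boards, max_power_w=5):
    print(f"{name:10s} {tops_per_w:.2f} TOPS/W  {usd_per_tops:.1f} USD/TOPS")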
5 Conclusion

The current maturity of machine learning research and applications makes it ideal for use in space-based systems. There have been a number of applications of machine learning techniques to commercial or ground-based space systems, and there are a growing number of applications in space-based systems. Promising uses of machine learning techniques in these settings include image processing and anomaly detection. In particular, the use of machine learning based anomaly detection techniques for FDIR in satellites is a significant opportunity that has not yet been fully grasped. This will require the development of bespoke algorithms designed to meet the demands of space-based systems, as well as the use of specific hardware platforms for computation.

There are many opportunities for the space sector to take COTS modules from the commercial sector for use in space flight. Work has already been done on several systems to qualify them for either aeronautical or space environments. The variance in computing performance and power consumption between these boards also allows for a wide range of applications. Low-power boards are generally suited to missions with low power budgets, but still have enough computing performance to deploy most machine learning methods. Higher powered boards are less suited to small missions such as CubeSats due to their large power consumption. They are also more susceptible to radiation due to their generally higher density of components, which reduces their applicability to deep space missions. However, the Google Coral, for example, uses an ASIC, which has been shown to be more radiation resistant [21]. Less powerful boards also tend to use FPGAs, which are the most resistant to radiation. Due to the multitude of applications of machine learning in the space sector, there are also many different machine learning methods that may be used.

In summary, there is a wide range of platforms available to the space sector that can be either used directly or modified for use in-orbit or for ground segment missions. However, the mission requirements will be the deciding factor on which board and which machine learning method should be used.

References

1. Furano, G., Meoni, G., Dunne, A., Moloney, D., Ferlet-Cavrois, V., Tavoularis, A., ... & Fanucci, L. (2020). Towards the Use of Artificial Intelligence on the Edge in Space Systems: Challenges and Opportunities. IEEE Aerospace and Electronic Systems Magazine, 35(12), 44-56.
2. FLPP preparing for Europe's next-generation launcher. Available at: https://www.esa.int/Enabling_Support/Space_Transportation/New_Technologies/FLPP_preparing_for_Europe_s_next-generation_launcher (Accessed: 27-May-2020).
3. Hundman, K., Constantinou, V., Laporte, C., Colwell, I., & Soderstrom, T. (2018). Detecting spacecraft anomalies using LSTMs and nonparametric dynamic thresholding. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 387-395).
4. Barry, B., Brick, C., Connor, F., Donohoe, D., Moloney, D., Richmond, R., ... & Toma, V. (2015). Always-on vision processing unit for mobile applications. IEEE Micro, 35(2), 56-66.
5. Vojtekova, A., Lieu, M., Valtchanov, I., Altieri, B., Old, L., Chen, Q., & Hroch, F. (2021). Learning to denoise astronomical images with U-nets. Monthly Notices of the Royal Astronomical Society, 503(3), 3204-3215.
6. Bagnell, J. A., Bradley, D., Silver, D., Sofman, B., & Stentz, A. (2010). Learning for autonomous navigation. IEEE Robotics & Automation Magazine, 17(2), 74-84.
7. Nvidia Jetson Xavier NX Data-sheet. Available at: https://developer.nvidia.com/embedded/jetson-xavier-nx (Accessed: 18-Feb-2021).
8. Huawei Atlas 200 Data-sheet. Available at: https://e.huawei.com/en/products/cloud-computing-dc/atlas/atlas-200 (Accessed: 18-Feb-2021).
9. Google Coral DK Data-sheet. Available at: https://coral.ai/docs/dev-board/datasheet/ (Accessed: 18-Feb-2021).
10. Intel Movidius Myriad 2 Data-sheet. Available at: https://ark.intel.com/content/www/us/en/ark/products/122461/intel-movidius-myriad-2-vision-processing-unit-4gb.html (Accessed: 18-Feb-2021).
11. Intel Movidius Myriad X Data-sheet. Available at: https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/myriad-x-product-brief.pdf (Accessed: 18-Feb-2021).
12. Hauck, F., & Kliewer, N. (2020). Data analytics in railway operations: Using machine learning to predict train delays. In Operations Research Proceedings 2019 (pp. 741-747). Springer, Cham.
13. Kelleher, J. D., Mac Namee, B., & D'Arcy, A. (2020). Fundamentals of machine learning for predictive data analytics: Algorithms, worked examples, and case studies. MIT Press.
14. Zhao, R., Yan, R., Chen, Z., Mao, K., Wang, P., & Gao, R. X. (2019). Deep learning and its applications to machine health monitoring. Mechanical Systems and Signal Processing, 115, 213-237.
15. Patel, J., Shah, S., Thakkar, P., & Kotecha, K. (2015). Predicting stock and stock price index movement using trend deterministic data preparation and machine learning techniques. Expert Systems with Applications, 42(1), 259-268.
16. Adhikari, P. P., Rao, H. V. G., & Buderath, M. (2018). Machine learning based data driven diagnostics & prognostics framework for aircraft predictive maintenance. In 10th International Symposium on NDT in Aerospace, Dresden, Germany.
17. Lary, D. J., Zewdie, G. K., Liu, X., Wu, D., Levetin, E., Allee, R. J., ... & Aurin, D. (2018). Machine learning applications for earth observation. Earth Observation Open Science and Innovation, 165.
18. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
19. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).
20. Zolghadri, A. (2012). Advanced model-based FDIR techniques for aerospace systems: Today challenges and opportunities. Progress in Aerospace Sciences, 53, 18-29.
21. Lacoe, R. C., Osborn, J. V., Koga, R., Brown, S., & Mayer, D. C. (2000). Application of hardness-by-design methodology to radiation-tolerant ASIC technologies. IEEE Transactions on Nuclear Science, 47(6), 2334-2341.