Comparison of Code Smells in iOS and Android Applications

Kristiina Rahkema, Dietmar Pfahl
Institute of Computer Science, University of Tartu, Tartu, Estonia
kristiina.rahkema@ut.ee (K. Rahkema), dietmar.pfahl@ut.ee (D. Pfahl)
ORCID: 0000-0003-2400-501X (D. Pfahl)

QuASoQ 2020: 8th International Workshop on Quantitative Approaches to Software Quality, December 1st, 2020, Singapore
© 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract

Code smells are patterns indicating bad practices that may lead to maintainability problems. For mobile applications, most of the research has been done on Android applications, with very little research on iOS applications. Our goal is to compare the variety, density, and distribution of code smells in iOS and Android applications. We analysed 273 open source iOS and 694 open source Android applications. We used PAPRIKA and GraphifySwift to find 19 object oriented code smells. We discovered that the distributions and proportions of code smells in iOS and Android applications differ. More specifically, we found: a) with the exception of one code smell (DistortedHierarchy), all code smells that could be observed in Android apps also occurred in iOS apps; b) the overall density of code smells is higher on iOS than on Android, with LazyClass and DataClass particularly sticking out; c) with regard to frequency, code smells are more evenly distributed on iOS than on Android, and the distributions of code smell occurrences differ more between the platforms on class level than on app level.

Keywords
Mobile applications, Android, iOS, Code smells

1. Introduction

Code smells are patterns indicating bad practices that often lead to maintainability problems [1]. Code smells have been studied extensively for desktop applications (shortened to "apps" in the following). For mobile apps, most of the analysis has been done on the Android platform.

Mannan et al. [2] analyzed 21 object oriented code smells in open source Android apps. They compared code smell occurrences in Android and Java desktop apps, looking at differences in variety, density, and distribution of code smells. They discovered that the variety of code smells is the same, but that the density and distribution of code smells in desktop Java and Android apps differ. They mention that other mobile platforms should have the same variety of code smells but do not discuss possible differences in density or distribution.

Habchi et al. [3] used the tool PAPRIKA [4] to analyse iOS and Android apps and compared the proportions of code smells on these platforms. They analysed iOS apps for four object oriented and three iOS specific code smells, and Android apps for four object oriented and two Android specific code smells. They discovered that code smell proportions were higher in Android apps.

Our goal is to compare the variety, density, and distribution of code smells in iOS and Android apps. First, we check whether the variety, density, and distribution of code smells differ between iOS and Android apps, to see if the results are similar to the differences found between Android and desktop Java apps by Mannan et al. [2]. Second, we extend the analysis done by Habchi et al. [3] by comparing the densities and distributions of more code smells in iOS and Android apps, to see if Android apps are in general more prone to code smells and if different platforms are more prone to different code smells.

In this study we aim to answer the following research questions:

RQ 1: Are all types of object-oriented code smells present in both iOS and Android apps?
To answer this and the following research questions, we used the tool GraphifySwift [5] (https://github.com/kristiinara/GraphifySwift) to analyse iOS apps and the tool PAPRIKA [4] to analyse Android apps. To make the code smell definitions (and calculations) comparable across platforms, we adapted the code smell queries defined by Rahkema et al. [5] when searching for code smells in Android apps. In total, we identified 19 code smell types that could potentially occur in apps on both platforms. We took into consideration that the variety of code smells depends on the programming language used. For example, the code smell RefusedParentBequest is not applicable to Swift because Swift lacks the protected keyword. Therefore, we did not include it in our analyses.

Our analysis showed that 18 of the 19 identified code smells occurred in apps on both platforms, i.e., Android and iOS. The code smell DistortedHierarchy never occurred in iOS apps.

To better understand whether the frequency of occurrence is similar, we formulated our second research question.

RQ 2: Do code smells occur with the same density in iOS and Android apps?

To answer this question, we calculated the overall density of all code smells and the densities of each of the 19 code smells over all apps on both iOS and Android. It turned out that, contrary to what Habchi et al. [3] expected, the overall density of code smells is higher in iOS apps than in Android apps. Code smells LazyClass, DivergentChange, PrimitiveObsession, and DataClass had a particularly high density in iOS apps. On the other hand, code smells LongMethod, LongParameterList, and ShotgunSurgery were clearly more frequent in Android apps. In addition, we found that the densities per code smell type were sometimes higher and sometimes lower in iOS apps as compared to Android apps. This might be explained by the fact that Android apps tend to have more of the code smells that correspond to more complex classes, whereas iOS apps tend to have more of the code smells that correspond to simpler classes.

To better understand the distributions of code smells in apps on the two platforms iOS and Android, we formulated our third research question.

RQ 3: Do code smell distributions differ between iOS and Android apps?

To answer this question, we first compared the proportions of code smell occurrences across all iOS and Android apps. The results confirmed what we had seen when we compared code smell densities: the proportions of code smells differ between platforms. In addition, we saw that code smells are more evenly distributed in iOS apps as compared to Android apps.

Then we analyzed how large the shares of smelly apps and of smelly classes are on each platform. We did these analyses for each code smell type separately. It turned out that the percentages of smelly apps are relatively similar between platforms. Only the code smell DataClass is much more prominent in iOS apps than in Android apps.

In addition, we found that the distributions of code smell occurrences on class level are more different between the platforms than on app level. This result might, again, be explained by the fact that Android apps usually have larger classes and, thus, tend to have more of the code smells that correspond to more complex classes, whereas iOS apps tend to have more compact classes and, thus, tend to have more of the code smells that correspond to simpler classes. This effect is more prominent when doing the analysis on class level than on app level.

2. Related Work

Code smells in desktop applications: Fowler [1] defined 22 object oriented code smells and provided refactorings for these code smells.
Khomh et al. [6] studied the impact of code smells. They found that code smells affect classes negatively and that classes with more code smells were more prone to changes [6]. Olbrich et al. [7] studied the evolution and impact of code smells based on two open source systems. Their findings confirmed that code smells negatively affect the way code changes. They were also able to identify different phases in the evolution of code smells [7]. Linares-Vásquez et al. [8] performed a large-scale analysis of Java mobile apps and discovered that anti-patterns negatively impact software quality metrics such as fault-proneness [8].

Tufano et al. [9] studied the change history of 200 open source projects and found that most code smells are introduced when the corresponding code is created and not when it is changed. They also found that when code does become smelly through evolution, this can be characterized by specific code metrics. Contrary to common belief [10], they discovered that most code smells are not introduced by newcomers, but by developers with high work loads and high release pressure [9].

Code smells in Android applications: Different kinds of code smells have been researched for Android, such as object-oriented, Android-specific, security-related, and energy-related code smells. Gottschalk et al. proposed an approach to detect energy-related code smells in mobile apps, validated this approach on Android, and showed that it is possible to reduce energy consumption by refactoring the code [11]. Ghafari et al. [12] studied security-related code smells and discovered that most apps contain at least some security-related code smells.
Hecht [4] proposed an approach to detect code smells and anti-patterns in Android systems and implemented this approach in a tool called PAPRIKA. This tool analyses the Android APK, creates a model of the code, and inserts this model into a neo4j database. Code smells are then defined as database queries, which makes it possible to query code smells on a large number of apps at the same time. He analysed 15 popular apps for the occurrences of four object oriented and three Android-specific code smells. Hecht et al. [13] tracked the software quality of 106 popular Android apps downloaded from the Google Play Store along their evolution. They calculated software quality scores for different versions of these apps and tracked their evolution. There were different evolution graphs, such as constant decline, constant rise, stability, or sudden change in either direction, depending on the programming practices of the team [13]. This shows that code quality is not necessarily linked to app size, but rather to the programming practices of the developers. Mateus et al. [14] used PAPRIKA to analyze Android apps written in Java and Kotlin. They analysed a set of 2167 open source Android apps, combining different databases of open source Android apps. They compared code smell occurrences in both languages and concluded that apps that were initially written in Java and later introduced Kotlin were of better quality than other Android apps [14].

In these papers using PAPRIKA, the number of code smells studied was limited by the number of code smells PAPRIKA is able to detect, and ranged from three to four object oriented code smells and four to six Android-specific code smells [15][13][14]. Mannan et al. [2] decided to broaden this scope and studied 21 object oriented code smells using the commercial tool InFusion. They analyzed 500 open source Android and 750 Java desktop apps, randomly selected from GitHub, for these 21 code smells and compared their occurrences. Mannan et al. found that the variety of code smells was the same and that most code smells occur in both systems with similar frequency, with major differences only for a couple of code smells. They concluded that studying code smells on mobile platforms can be done with tools meant for desktop apps. They also found that the code smells that have been researched so far are not the same ones that occur most, and that the focus should shift to code smells that are more relevant [2]. They suggest that other mobile platforms will have the same code smells, but do not give any suggestions towards possible differences in density or distribution. Unfortunately, the tool InFusion used by Mannan et al. does not seem to be available anymore. Therefore, a direct comparison using InFusion for code smell analysis on iOS is no longer possible.

Code smells in iOS applications: Habchi et al. [3] used PAPRIKA to detect code smells in iOS apps. They used ANTLR4 grammars to generate parsers for Swift and Objective-C code. They created the app graphs that could then be used by PAPRIKA. They analysed 176 Swift and 103 Objective-C apps from a collaborative list of open source iOS apps. In their study they analysed four object oriented, three iOS specific, and two Android specific code smells. They compared smell proportions in iOS and Android apps and discovered that the proportions of code smells were higher in Android apps. On the other hand, the proportions of code smells in Objective-C and Swift were similar [3].

Rahkema et al. [5] introduced a tool called GraphifySwift that analyses Swift code and detects 34 object oriented code smells. Similarly to PAPRIKA, GraphifySwift enters data about the analysed app into a neo4j database. The database structures used by PAPRIKA and GraphifySwift are similar, but slightly different. In their analysis they used the same collaborative list of open source iOS apps but did not compare the results to Android.

In the following, we extend the research in [2, 3, 5]. We adapted the queries defined in [5], where possible, so that they could be applied to a database populated by PAPRIKA. We used PAPRIKA to analyse Android apps and GraphifySwift to analyse Swift apps. Then we compared the two platforms with regard to the variety, density, and distribution of 19 code smells.
3. Methods

In Section 3.1, we present the tools used for code smell analysis. In Section 3.2, we cover the choice of apps, and in Section 3.3 we describe the analysis performed.

3.1. Code Smell Analysis

In previous research, a tool called PAPRIKA has been used to find code smells in Android applications [4, 15, 13, 3, 14]. PAPRIKA analyses the Android APK, enters data about the applications into a neo4j database, and defines queries for each code smell. For analysing iOS applications, Habchi et al. [3] used PAPRIKA to query code smells, but populated the neo4j database using ANTLR grammars. Rahkema et al. [5] introduced a new tool called GraphifySwift that extends the functionality of PAPRIKA. It analyses Swift code, enters data about the iOS applications into a neo4j database, and defines database queries to find code smells. PAPRIKA is able to find four object oriented code smells. Since the queries for these four code smells are implemented identically in GraphifySwift, it produces the same results as PAPRIKA for them. In GraphifySwift, additional code smell queries are defined. Overall, GraphifySwift is able to find 34 object oriented code smells.

For the analysis of iOS apps we used the tool GraphifySwift. We used the same thresholds as in Rahkema et al. [5]. Note that we focused on Swift code, as Swift has replaced Objective-C and not many differences between the two languages are to be expected according to Habchi et al. [3].

For Android apps we used PAPRIKA to populate the neo4j database. We then took the queries defined by Rahkema et al. for GraphifySwift to find code smells. Since GraphifySwift was originally developed to analyse iOS apps, we had to adapt the code smell queries so that they could be used on the database produced by PAPRIKA. We made the following changes to the code smell queries:

First, we removed references to Module nodes, i.e., the relationship
    (app)-APP_OWNS_MODULE->(module)-MODULE_OWNS_CLASS->(class)
was substituted by the relationship
    (app)-APP_OWNS_CLASS->(class)

Second, we removed references to argument types or substituted them with argument names. Argument names are not accessible in Java bytecode, and therefore the argument name provided by PAPRIKA is actually the argument type.

Finally, we added the relationship
    (variable|argument)-IS_OF_TYPE->(class)
by finding classes whose name matched the argument name or variable type.

After these modifications of the database and queries, 19 of the 34 GraphifySwift code smell queries could be used on the Android app database produced by PAPRIKA. The code smell queries that had to be excluded contained metrics or attributes that were not provided by PAPRIKA. We excluded, for example, queries referring to code duplication, maximum nesting depth, number of switch statements, and number of comments.

For the analysis of Android apps we calculated new thresholds based on the apps that we analysed. The list of iOS and Android thresholds is included in the thresholds table (https://figshare.com/articles/conference_contribution/Thresholds_for_iOS_and_Android_code_smell_analysis/13102991).
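To illustrate the first of the query adaptations described above, the sketch below shows a simplified LazyClass-style query (a class with unusually few methods and instructions) in both forms, written as Cypher against the respective neo4j databases. This is only an illustrative sketch: the node labels (App, Module, Class), the metric property names (number_of_methods, number_of_instructions), and the threshold parameters are placeholders chosen for this example and do not necessarily match the exact schema used by GraphifySwift or PAPRIKA.

    // GraphifySwift-style query on the iOS database: classes are reached via Module nodes.
    // Labels, properties, and thresholds below are illustrative placeholders.
    MATCH (app:App)-[:APP_OWNS_MODULE]->(:Module)-[:MODULE_OWNS_CLASS]->(cl:Class)
    WHERE cl.number_of_methods <= $lazy_class_methods
      AND cl.number_of_instructions <= $lazy_class_instructions
    RETURN app.name AS app, cl.name AS lazy_class;

    // Adapted query for the PAPRIKA-populated database: there are no Module nodes,
    // so classes are attached to the app directly via APP_OWNS_CLASS.
    MATCH (app:App)-[:APP_OWNS_CLASS]->(cl:Class)
    WHERE cl.number_of_methods <= $lazy_class_methods
      AND cl.number_of_instructions <= $lazy_class_instructions
    RETURN app.name AS app, cl.name AS lazy_class;

Apart from the changed ownership path, the body of such a query (the conditions, thresholds, and returned values) stays the same, which is why the adaptations did not require changing the code smell definitions themselves.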
3.2. Choice of Applications

For the analysis of iOS apps we used the same collaborative list of open source iOS apps as was used by Rahkema et al. [5] and whose older version was used by Habchi et al. [3]. The final set of successfully analysed apps was the same as in [5] and included 273 open source iOS apps.

For the analysis of Android apps we took the list of apps provided by Habchi et al. [3]. Since the list only included app package names, we queried the AllFreeAPK API (https://m.allfreeapk.com/api/) to find and download these apps. We decided to search AllFreeAPK instead of GitHub, as PAPRIKA uses APKs for analysis, and this way we were able to skip the step of compiling these apps. Later during the analysis we needed to discard some of the very big apps due to performance issues. In total, we included 694 open source Android apps in our analysis.

3.3. Data Analysis

To answer RQ1, we checked whether each of the 19 identified code smells occurred in at least one app on each platform.

To answer RQ2, we calculated the densities of code smells for both iOS and Android apps and compared these. Code smell density was calculated by counting the number of code smells (in total and per code smell type) and dividing by the number of app instructions.

To answer RQ3, we had to perform several calculations. To calculate the relative frequencies of code smells per code smell type on each platform, we counted the code smells of a type in all apps and divided by the total code smell count. We did this per platform. To calculate the code smell distributions on app and class level per platform, we counted how many apps (and classes) contain at least one code smell of a certain type and then divided by the total number of apps (and classes).
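To make these measures explicit, they can be written as follows. The notation is ours: A_p denotes the set of analysed apps on platform p, C_p the set of their classes, i_a the number of instructions in app a, s_a the number of code smell instances found in app a, and s_{a,t} the number of instances of code smell type t in app a. For the densities we use the aggregate reading (totals summed over all apps of a platform), which matches the overall smells-per-kilo-instruction figures reported in Section 4; a per-app average would be an alternative reading of the description above.

    \mathrm{density}_p = \frac{\sum_{a \in A_p} s_a}{\sum_{a \in A_p} i_a / 1000}, \qquad
    \mathrm{freq}_p(t) = \frac{\sum_{a \in A_p} s_{a,t}}{\sum_{a \in A_p} s_a}

    \mathrm{dist}^{\mathit{app}}_p(t) = \frac{|\{a \in A_p : s_{a,t} \geq 1\}|}{|A_p|}, \qquad
    \mathrm{dist}^{\mathit{class}}_p(t) = \frac{|\{c \in C_p : c \text{ contains a smell of type } t\}|}{|C_p|}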
4. Results

We analysed 273 open source iOS apps using GraphifySwift and 694 open source Android apps using PAPRIKA and the modified code smell queries from GraphifySwift to answer our research questions. We analyzed the apps with regard to 19 code smells: BlobClass, ComplexClass, CyclicClassDependency, DataClass, DataClumpFields, DistortedHierarchy, DivergentChange, InappropriateIntimacy, LazyClass, LongMethod, LongParameterList, MiddleMan, ParallelInheritanceHierarchies, PrimitiveObsession, SAPBreaker, ShotgunSurgery, SpeculativeGeneralityProtocol, SwissArmyKnife, and TraditionBreaker.

Below, we present and discuss the results for each research question separately.

RQ 1: Are all types of object-oriented code smells present in both iOS and Android apps?

When comparing the occurrence of code smells on each platform, we found that 18 of the 19 identified code smells occurred in apps on both platforms, i.e., Android and iOS. The code smell DistortedHierarchy never occurred in iOS apps. Our result does not fully support Mannan et al.'s expectation that mobile apps on platforms other than Android should exhibit the same code smells [2].

RQ 2: Do code smells occur with the same density in iOS and Android apps?

The results of our code smell density analysis are shown in Figure 1. Accumulated over all code smells, it turned out that the apps on the iOS platform had a density of 41.7 smells/kilo-instructions while the apps on Android only had a density of 34.4 smells/kilo-instructions. This result is contrary to what Habchi et al. [3] expected.

[Figure 1: Comparison of code smell densities between Android (blue) and iOS (red) apps. Y-axis: code smell count per 1000 instructions; X-axis: code smell types.]

Moreover, it can be seen from Figure 1 that the code smell densities differ between iOS and Android. Code smells LazyClass, DivergentChange, PrimitiveObsession, and DataClass had a particularly high density in iOS apps. On the other hand, code smells LongMethod, LongParameterList, and ShotgunSurgery were clearly more frequent in Android apps. The fact that code smell densities were sometimes higher and sometimes lower in iOS apps as compared to Android apps might be explained by the fact that Android apps tend to have more of the code smells that correspond to more complex classes, whereas iOS apps tend to have more of the code smells that correspond to simpler classes.

RQ 3: Do code smell distributions differ between iOS and Android apps?

Figure 2 shows the relative frequency of code smell occurrences over all apps on the Android platform (blue bars) and the iOS platform (red bars). The results confirm what we had seen when we compared code smell densities: the proportions of code smells differ between platforms. In addition, we see that code smells are more evenly distributed in iOS apps as compared to Android apps.

[Figure 2: Code smell proportions on Android (blue) and iOS (red). Y-axis: % of code smells; X-axis: code smell types.]

Then we analyzed how large the shares of smelly apps and of smelly classes are on each platform. We did these analyses for each code smell type separately. Figures 3 and 4 show the percentages of apps and classes, respectively, containing code smells of a certain type.

[Figure 3: Comparison of code smell frequencies on app level between Android (blue) and iOS (red). Y-axis: % of apps with code smell; X-axis: code smell types.]

[Figure 4: Comparison of code smell frequencies on class level between Android (blue) and iOS (red). Y-axis: % of classes with code smell; X-axis: code smell types.]

We found that the percentages of smelly apps are relatively similar between platforms. The biggest differences occur for the code smells DataClass (79% of iOS apps have at least one affected class while only 7% of Android apps are affected), MiddleMan (15% of iOS apps are affected but only 1% of Android apps), and DistortedHierarchy (25% of Android apps are affected but none of the iOS apps is).

We found that the distributions of code smell occurrences on class level are more different between the platforms than on app level. This result might, again, be explained by the fact that Android apps usually have larger classes and, thus, tend to have more of the code smells that correspond to more complex classes, whereas iOS apps tend to have more compact classes and, thus, tend to have more of the code smells that correspond to simpler classes. This effect is more prominent when doing the analysis on class level than on app level.

In addition, we analyzed the occurrence of the method-based code smells LongMethod and LongParameterList separately. We found that in iOS apps 9% of methods are considered LongMethod, while this is the case for 14% of the methods in Android apps.
Similarly, in iOS apps 5% of the methods have a LongParameterList, while this is the case for 9% of the methods in Android apps.

5. Threats to Validity

Internal Validity: In our case, internal validity might be affected by how code smells are detected by the tools used. PAPRIKA has been used in multiple studies [15, 13, 3, 14]. GraphifySwift was introduced in [5] and validated by replicating results in [3]. We adapted the code smell queries defined in [5], but did so without changing the code smell definitions themselves.

External Validity: We analysed open source apps. For Swift, the analysis can only be performed if the code of the app is accessible. For Android, the analysis could also be performed on apps from the app store. Therefore, for both platforms open source apps were chosen. It was previously shown [5] that, although there are some differences compared to apps on the app store, these differences are small. On the iOS platform we only analyzed apps written in Swift. Given that Objective-C and Swift code is quite similar, we assume our results extend to apps written in Objective-C.

Construct Validity: GraphifySwift uses standard definitions of code smells found in the literature [5]. In the code smell queries we use thresholds calculated based on the app set analysed. Using thresholds is a common approach for detecting code smells. We used the same method to determine thresholds as was used by Hecht et al. [13], Habchi et al. [3], and Rahkema et al. [5]. Thresholds might differ between languages, but since they are calculated based on the current set of apps analysed, language specific differences should be resolved.

Reliability: For the iOS analysis we used the same collaborative list of open source iOS apps written in Swift as was used in [5]. All these apps are available on GitHub. The list of successfully analysed apps can be found on the tool GitHub page. GraphifySwift is open source and also available on its GitHub page. For the Android analysis we used the list of apps analysed by [3]; the list of successfully analysed apps is published as well (https://figshare.com/articles/dataset/iOS_and_Android_app_analysis_data/13103012). PAPRIKA is open source and also available on GitHub. The adapted code smell queries used for the Android analysis can be found in the published list of Android code smell queries (https://figshare.com/articles/conference_contribution/GraphifySwift_queries_adapted_for_PAPARIKA_for_Android_code_smell_analysis/13102994).

6. Conclusion

Mannan et al. [2] analysed the density and distribution of code smells in Android apps. We calculated a similar density and distribution for iOS and Android apps and saw that these densities and distributions were different. Additionally, we discovered that one of the code smells analysed by Mannan et al. was not present in iOS apps.

Habchi et al. [3] compared ratios of code smell occurrences on iOS and Android. We extended their research by adding additional code smells to the analysis and found that code smell occurrences are not always higher in Android apps. For some code smells they were higher in iOS apps. This shows that Android apps are not necessarily smellier, but that different kinds of code smells are more prevalent depending on the platform.

These results can be interesting for developers moving from one platform to the other. They can also be useful for developers of tools for these platforms. We see that the emphasis on which code smells to look at differs depending on the platform.
Acknowledgments

This research was partly funded by the Estonian Center of Excellence in ICT research (EXCITE), the IT Academy Programme for ICT Research Development, the Austrian ministries BMVIT and BMDW, and the Province of Upper Austria under the COMET (Competence Centers for Excellent Technologies) Programme managed by FFG, and by the group grant PRG887 of the Estonian Research Council. We thank Rudolf Ramler for the thorough review of a previous version of this paper.

References

[1] M. Fowler, Refactoring: Improving the Design of Existing Code, Addison-Wesley Professional, 2018.
[2] U. A. Mannan, I. Ahmed, R. A. M. Almurshed, D. Dig, C. Jensen, Understanding code smells in Android applications, in: 2016 IEEE/ACM Int'l Conf. on Mobile Softw. Eng. and Systems (MOBILESoft), IEEE, 2016, pp. 225–236.
[3] S. Habchi, G. Hecht, R. Rouvoy, N. Moha, Code smells in iOS apps: How do they compare to Android?, in: 2017 IEEE/ACM 4th Int'l Conf. on Mobile Softw. Eng. and Systems (MOBILESoft), IEEE, 2017, pp. 110–121.
[4] G. Hecht, An approach to detect Android antipatterns, in: Proc. of the 37th Int'l Conf. on Software Engineering – Volume 2, IEEE Press, 2015, pp. 766–768.
[5] K. Rahkema, D. Pfahl, Empirical study on code smells in iOS applications, in: Proc. of the IEEE/ACM 7th Int'l Conf. on Mobile Softw. Eng. and Systems, MOBILESoft '20, Association for Computing Machinery, New York, NY, USA, 2020, pp. 61–65.
[6] F. Khomh, M. Di Penta, Y.-G. Guéhéneuc, An exploratory study of the impact of code smells on software change-proneness, in: 2009 16th Working Conf. on Reverse Engineering, IEEE, 2009, pp. 75–84.
[7] S. Olbrich, D. S. Cruzes, V. Basili, N. Zazworka, The evolution and impact of code smells: A case study of two open source systems, in: 2009 3rd Int'l Symposium on Empirical Software Engineering and Measurement, IEEE, 2009, pp. 390–400.
[8] M. Linares-Vásquez, S. Klock, C. McMillan, A. Sabané, D. Poshyvanyk, Y.-G. Guéhéneuc, Domain matters: Bringing further evidence of the relationships among anti-patterns, application domains, and quality-related metrics in Java mobile apps, in: Proc. of the 22nd Int'l Conf. on Program Comprehension, ACM, 2014, pp. 232–243.
[9] M. Tufano, F. Palomba, G. Bavota, R. Oliveto, M. Di Penta, A. De Lucia, D. Poshyvanyk, When and why your code starts to smell bad, in: Proc. of the 37th Int'l Conf. on Software Engineering – Volume 1, IEEE Press, 2015, pp. 403–414.
[10] T. Sharma, Extending Maintainability Analysis Beyond Code Smells, Ph.D. thesis, 2019.
[11] M. Gottschalk, J. Jelschen, A. Winter, Saving energy on mobile devices by refactoring, in: EnviroInfo, 2014, pp. 437–444.
[12] M. Ghafari, P. Gadient, O. Nierstrasz, Security smells in Android, in: 2017 IEEE 17th Int'l Working Conf. on Source Code Analysis and Manipulation (SCAM), IEEE, 2017, pp. 121–130.
[13] G. Hecht, O. Benomar, R. Rouvoy, N. Moha, L. Duchien, Tracking the software quality of Android applications along their evolution (T), in: 2015 30th IEEE/ACM Int'l Conf. on Automated Softw. Eng. (ASE), IEEE, 2015, pp. 236–247.
[14] B. G. Mateus, M. Martinez, An empirical study on quality of Android applications written in Kotlin language, Empirical Software Engineering (2018) 1–38.
[15] G. Hecht, R. Rouvoy, N. Moha, L. Duchien, Detecting antipatterns in Android apps, in: Proc. of the Second ACM Int'l Conf. on Mobile Softw. Eng. and Systems, IEEE Press, 2015, pp. 148–149.