ModelByVoice - towards a general purpose model editor for blind people

João Lopes, DI FCT, Universidade NOVA de Lisboa, Lisboa, Portugal, jr.lopes@campus.fct.unl.pt
João Cambeiro, DI FCT, Universidade NOVA de Lisboa, Lisboa, Portugal, jmc12976@campus.fct.unl.pt
Vasco Amaral, NOVA LINCS, DI FCT, Universidade NOVA de Lisboa, Lisboa, Portugal, vma@fct.unl.pt

Abstract—Context: Current modelling technologies, supported by modelling frameworks, underpin the current adoption of Model-Driven Software Development (MDD) and support the Software Engineering phases. Problem: These tools focus solely on graphical support and visual models: the chosen modelling language's concrete syntax is graphical, textual, or both. This approach discards the use of other senses for modelling purposes and, for instance, the possibility for blind software engineers to take advantage of modelling and deal with the abstractions it captures. It is necessary to improve the productivity of people with limitations or disabilities while modelling; they should not be excluded from the modelling activity. These accessibility barriers start as early as modelling education. Method: In this paper we present a prototype of a tool that takes advantage of current voice recognition and speech synthesis to edit models in diverse modelling languages. The elegance of this work lies in the fact that it is not only meant to make MDD accessible to a broader spectrum of practitioners, but is also itself developed with an MDD approach. Results: A prototype was built, named ModelByVoice. The tool is not bound to a particular modelling language, as long as that language is meta-modelled. ModelByVoice is the base for a new tool that will enable MDD while highlighting the relevant human factor of accessibility to models via voice and audio. Ultimately, it aims at bringing blind people accessibility to MDD and Domain-Specific (Modelling) Languages - DS(M)Ls - in the same way current modelling workbenches already do for diagrammatic languages.

Index Terms—Model-Driven Software Development, Modelling Workbenches, Accessibility, Speech Generation and Synthesis, Audio Models

I. INTRODUCTION

A research study carried out by IBM in 2004 [1] emphasises the idea that modelling software is, and will continue to be, from the perspective of many engineers, the foundation for dealing with the complexity of systems. The key is abstraction. The modelling activity thus proves to be fundamental, since it allows not only a better understanding and clarification of what one intends to develop, but also the definition of a plan with the requisites and functionalities needed for the system to be implemented. Ultimately, the abstractions captured by the models are represented through visual (mostly diagrammatic) languages. Both general-purpose and Domain-Specific (Modelling) Languages - DS(M)Ls - have a visual nature and assume that, in general, a software engineer will not have physical limitations that prevent him or her from handling computers. However, visually impaired people who need to use these modelling languages find them very complicated to use, due to the lack of software adapted to their limitations. Typically, language developers delegate the responsibility of supporting disabilities to third-party general-purpose software, which focuses on textual reading. This may not be sufficient, because the integration with the modelling platforms can be cumbersome and does not bring the required productivity to the modelling effort.

According to Stack Overflow's 2018 survey [2] of over 100,000 software developers, 1.4% are blind or have difficulty seeing. The small percentage of these professionals makes it economically non-viable to build dedicated software, which explains the lack of products both for daily life and for supporting those software engineers in their professional activity. Software modelling, one of the phases of the software development life-cycle, is perhaps one of the most affected by this lack of support. Diagrammatic modelling workbenches such as Eclipse GMF/EMF [3], [4], DSL Tools [5], AToMPM [6], GME [7] and MetaEdit+ [8], as well as their textual counterparts such as Xtext [9] for Eclipse or the Meta-Programming System (MPS) [10], do not provide support for blind people to model diagrams through resources such as voice or touch. The concrete syntax of these languages focuses only on visual aspects, that is, it requires visual analysis, ignoring other senses such as sound and touch, which makes the modelling activity extremely difficult for blind or visually impaired people. We argue that audio/voice is yet another aspect of human interaction that should be properly supported in modelling. Therefore, we propose a tool that uses voice synthesis and recognition in a more structured way than the one-dimensional approach of mapping textual syntax solutions one-to-one, instead exploiting the natural 2D structure of the models' (mostly graph-based) abstract syntax. This approach should support easier manipulation of models by the end user. It must support editing operations equivalent to the ones already found in textual/graphical editors, such as CRUD's create, update and delete, and others such as select, navigate or query.
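To make the contrast with one-to-one text mapping concrete, the following minimal sketch — in Java, and with purely illustrative names that are not ModelByVoice's actual API — holds a model directly as a graph of nodes and links, edits it through CRUD-style operations, and "reads" it back as a sentence that a speech synthesiser could speak:

```java
import java.util.*;

// Hypothetical sketch: a model kept as a graph (nodes + links), edited through
// the CRUD-style operations a voice front-end could map spoken commands onto.
class GraphModel {
    final Map<String, String> nodes = new LinkedHashMap<>(); // name -> type
    final List<String[]> links = new ArrayList<>();          // {source, target}

    void createNode(String name, String type) { nodes.put(name, type); }

    void createLink(String source, String target) {
        if (!nodes.containsKey(source) || !nodes.containsKey(target))
            throw new IllegalArgumentException("unknown node");
        links.add(new String[]{source, target});
    }

    void deleteNode(String name) {
        nodes.remove(name);
        // removing a node also removes its links, keeping the graph consistent
        links.removeIf(l -> l[0].equals(name) || l[1].equals(name));
    }

    // the "read" operation: a textual description that speech synthesis
    // could play back to the user instead of rendering a picture
    String describe() {
        StringBuilder sb = new StringBuilder(nodes.size() + " nodes, " + links.size() + " links.");
        for (String[] l : links) sb.append(" ").append(l[0]).append(" to ").append(l[1]).append(".");
        return sb.toString();
    }
}

public class Demo {
    public static void main(String[] args) {
        GraphModel m = new GraphModel();
        m.createNode("red", "state");
        m.createNode("green", "state");
        m.createLink("red", "green");
        System.out.println(m.describe()); // prints: 2 nodes, 1 links. red to green.
    }
}
```

The point of the sketch is only that the editing operations act on the graph-shaped abstract syntax itself, not on a serialised textual view of it.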
The MDD (Model-Driven Development) approach was the technique adopted for the creation of the platform, since this concept promotes the systematic reuse of components. Beydeda et al. [11] describe the MDD technique as being based on models that are considered the main elements of the development process. These models aid in thinking about the problem domain and designing a resolution in the solution domain, providing abstractions of a physical system that allow engineers to focus on the critical aspects of that system.

In this paper, we present our tool prototype, named ModelByVoice, which allows visually impaired people to model diagrams or systems with any modelling language (it is not hardwired to a specific language) through their voice, thanks to a speech recognition system implemented in the tool. The tool was created under the assumption that the supported domain-specific languages (DSLs) are meta-modelled, as the current focus of the approach is on the language's abstract syntax rather than its concrete syntax.

The rest of this paper is organised as follows. In Section II we discuss the current efforts that, to the best of our knowledge, contribute to tackling the problem of supporting modelling in software engineering for blind people. In Section III we present an overview of our prototype. In Section IV we discuss a preliminary assessment of our prototype tool. Finally, in Section V we conclude and discuss future challenges to address.

II. STATE OF THE ART

The current state of the art does not provide tools that make it possible for the visually impaired to model diagrams satisfactorily. Some tools were developed to circumvent this problem, but in most cases with limited success. One example is the Technical Diagram Understanding for the Blind project, known as TeDUB [12]. This project was developed to provide a UML modelling tool accessible to visually impaired individuals. Mainly a visualisation tool, it let users create and explore diagrams through a joystick or a keyboard [13], which means that its creators adopted a haptic communication approach for the domain users. Models were persisted in XMI format so that the tool could work with models exported from tools like Rational Rose or Poseidon UML. As the tool was a model navigator and not a proper model editor, the project did not have the expected success, due to the poor adherence of the domain users, and it frequently failed when it had to handle a significant amount of data. The project stopped in 2005, with the authors claiming [14] that this sort of solution would have more success as a mere transformation tool exporting the models into HTML code, maintaining the links and using plain text, as that would be more interoperable with current screen readers, which already read web pages.

Another approach to this problem, named PRISCA [15], tried to circumvent this obstacle through 3D printing. The users, once confronted with the diagrams in 3D, tried to interpret them through touch, but the solution proved to be expensive, slow and ineffective. Thus, given the reasons stated above, developing ModelByVoice was a challenging, motivating and innovative task, since the search for new concepts and paradigms is almost always associated with building something better adjusted to reality and to the demands of today's world.

According to the documentary research carried out, the tool that most closely resembles the features and resources of ModelByVoice is the VoiceToModel tool [16]. This tool was designed to enable visually impaired requirements engineers to derive KAOS models, conceptual models and feature models. It uses speech recognition and synthesis mechanisms: the user, through his voice, enunciates the execution commands associated with the platform to create the type of model that he wants; subsequently, an audio response is given to the user, serving as feedback on the voice commands. Its great limitation compared to ModelByVoice is that it only allows the creation and editing of diagrams written in one of the three languages described above, while with ModelByVoice we can, theoretically, use any desired language as long as its metamodel is defined.

III. SOLUTION OVERVIEW

A. Architecture

At the architecture level, the ModelByVoice platform is implemented in three layers. The first layer is responsible for the voice recognition process: the platform continuously captures the sound of the surrounding environment and, when a speech utterance is detected, converts the sound to text and delivers the result to the second layer. The second layer then tries to match the received user command to an operation defined by the modelling language; if a valid operation is detected, this layer is responsible for its execution. The feedback mechanism is implemented in the third layer, where the platform uses speech synthesis to communicate the results of the input commands back to the users. When an invalid command is detected, the system alerts the user and prompts for a new attempt.

B. Technology

To create the necessary basis to start developing the domain-specific language support, we used Epsilon [17], an interoperable and consistent set of languages for model-driven engineering. This toolset is grounded on the Eclipse Modelling Framework (EMF) [3], an open-source code generation framework in Eclipse based on a structured data model called Ecore. Models in EMF are represented using the Ecore meta-modelling language (a "de facto" implementation of MOF) [3]. Concerning the usefulness of this framework for the development of ModelByVoice, EMF proved to be fundamental, as we precisely define the metamodel of the modelling language with Ecore. ModelByVoice uses EMF to read any language as long as its metamodel is defined with Ecore. This is a key functionality and a highlight of our platform, and an advantage when compared to other approaches.

We used the Epsilon Transformation Language (ETL) [18] and the Epsilon Generation Language (EGL) [19], both of which belong to Epsilon. ETL was used to associate (compose), via model transformation, which elements of the input metamodel (the metamodel of the language with which one intends to model) are nodes, links and/or compartments (as explained in Subsection III-C) — the elements that will compose the final metamodel. For example, if we want to model using the state machine language (formed by states and transitions), we have to associate, by equivalence, the notion of state to a node and the notion of transition to a link. Each time we want a modelling environment for a particular language, it is necessary to generate the platform code with EGL from its metamodel (the result of composing the previously referred elements). EGL was used mainly to embed the types of modelling language elements to be modelled; the data structure chosen to store this information was a pair of lists, one for the nodes and another for the links. The generated program code is thus adapted to that modelling language and restricts the user to the language domain.

As said before, the ModelByVoice platform comprises a speech recognition and a speech synthesis system. The technological tools used for the speech recognition system were Sphinx-4 [20] together with Google Cloud Speech-to-Text. The first one, Sphinx-4, is used for the generic application commands, such as the create, remove and save commands, while Google Cloud Speech-to-Text is used to recognise the variables of the elements that will compose the diagram, such as the names and types of the objects that the user can create or edit. The interaction between the user and ModelByVoice is established in English, because it is a universal language and may cover a more significant number of potential users around the world. It is a standard procedure among programmers for the names of variables and classes to result from the composition of more than one common word; for example, getListOfClients could be used to name a function. Google Cloud Speech-to-Text only supports words that are present in the English dictionary, and as such we restrict the set of variable names to words that belong to the English dictionary.

The FreeTTS tool [21] is used to implement the speech synthesis component. FreeTTS is an open-source tool that converts text to audio by voice synthesis. This component was implemented with the purpose of giving the user feedback on the execution of the operations and on the current state of the model while modelling. The MBROLA voice system [22] was also integrated into the tool. This project contains several synthesised voices, in different languages and genders. We decided to use the English male voice; the MBROLA voices are less robotic than the standard FreeTTS voice, getting closer to a recording of the human voice.

Fig. 1. ModelByVoice Architecture

C. Functionality

An intended key aspect of this platform is that it gives the user the possibility to model with any modelling language as long as its metamodel exists. By reading the meta-model of that language, the platform associates the elements of the language with the standard notions described below.

At the operation level, the user can model diagrams based on the notions of node, link, compartment and attribute. A node represents a particular entity or element; a link makes the connection between nodes and/or compartments; a compartment represents a hierarchical set of nodes, that is, a node composed of other nodes; and, finally, an attribute can be associated with one or more nodes. In Figure 2 we represent the standard language metamodel, which covers, by association, most elements of the languages and which, for most input languages, will be the output metamodel after the conversion of the respective original input metamodel. As explained before, we have to convert the metamodel of the language with which we want to model, by associating its composing elements to the notions represented in that standard metamodel, which is in most cases the final metamodel.

The ModelByVoice functionality is based on operations performed through associated voice commands. The edited languages are required to support voice commands for CRUD (create, read, update, delete) operations on the diagram and its elements, giving the user the ability to create, listen to ("read"), edit and remove the variables that compose the elements of each model. Figure 3 shows the supported commands of the platform.

The primary challenge found in the implementation process was how the user would navigate the diagrams and perform the desired operations. The notion of "navigation element" was used to overcome this problem.
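The role this navigation element plays can be illustrated with a small sketch — hypothetical names, not the tool's actual source — in which one diagram element acts as the cursor that anchors every subsequent voice operation:

```java
import java.util.*;

// Illustrative sketch of the "navigation element" idea: one element of the
// diagram serves as the current reference point for the user's voice commands.
class Diagram {
    final List<String> nodes = new ArrayList<>();
    String navigationElement; // the user's current anchor in the diagram

    void createNode(String name) {
        nodes.add(name);
        navigationElement = name; // a newly created node becomes the navigation element
    }

    void changeNavigationElement(String name) {
        // models the "change navigation element" voice command
        if (nodes.contains(name)) navigationElement = name;
    }

    void deleteNode(String name) {
        nodes.remove(name);
        // fallback when the navigation element is deleted: revert to the
        // oldest remaining element (simplified from the tool's rules)
        if (name.equals(navigationElement))
            navigationElement = nodes.isEmpty() ? null : nodes.get(0);
    }

    public static void main(String[] args) {
        Diagram d = new Diagram();
        d.createNode("red");
        d.createNode("yellow");
        d.deleteNode("yellow");
        System.out.println(d.navigationElement); // prints: red
    }
}
```

The design intent is simply that the user always has a well-defined "you are here" point, so spoken edits never need to name a location from scratch.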
This element, which may be a node or a compartment (which is also a node), acts as a reference and guidance point for the user while exploring the diagrams. The election of the navigation element is based on three possibilities: i) when a new node or compartment is created, it is assigned the role of navigation element; ii) by enunciating the command "change navigation element", the user can designate the new current navigation element, stating its name through a voice command; iii) when the navigation element is deleted, and when the user loads a previously created diagram for editing, the oldest element in that diagram becomes the navigation element. This was the form idealised and chosen to prevent the user from getting lost during the creation, exploration and editing of the diagrams.

Fig. 2. Standard Meta Model

Fig. 3. Sphinx Commands Grammar

The diagrams are presented to the user through the list diagram command. This command lists, through voice synthesis, either the whole diagram content or part of it, in which case the user has to state the range of elements to be listed by the platform.

Another essential function implemented was the help mode command. This mode allows users to be guided during navigation, computing the possible commands that the user can enunciate from the current state of the diagram.

Each time the user announces a voice command, the operation associated with that command is executed by the platform, and the voice synthesis then issues a response, so that the user knows whether the operation executed successfully. This feedback during execution prevents the user from becoming lost in the course of the operations.

Once the user has executed all the desired operations, the save diagram command must be called. This command saves the diagram in XMI (XML Metadata Interchange) format, in a directory of the machine where the platform is located.

In Figure 4 we represent, through an activity diagram, the possible executions of ModelByVoice and its interaction with the user. All the possible operations are represented in this activity diagram, as well as all the options that the user can take while operating the platform. In turn, Figure 5 shows the internal workings of the help mode available to the user. This helpful resource computes the possible operations that can be performed from a given element and informs the user about which commands can be executed. Figures 6 and 7 are session logs that exemplify model creation and editing by the operator while using our tool prototype.

Fig. 4. Activity Diagram of the ModelByVoice

Fig. 5. Internal execution of the help mode

Fig. 6. Example of a session using the console in debug mode - model creation

Fig. 7. Example of a session using the console in debug mode - edition with create, remove and navigation

IV. PRELIMINARY ASSESSMENT

With the collaboration of the Portuguese Association for Visual Impairment (including blindness and amblyopia), ACAPO - Associação dos Cegos e Amblíopes de Portugal, two blind users were involved in a preliminary assessment session, in which they had the chance to use the prototype and answer a questionnaire.

The subjects who performed the usability tests of the tool fell into two categories: blind or visually impaired users, and users without any visual limitation. Regarding the first profile, one subject held an MSc in Software Engineering, with previous modelling experience using Graphviz [23] and TeDUB with keyboard and joystick; the second subject did not have a formal education in Computer Science and Engineering. The second group, of users without visual limitations, consisted of three MSc students of Software Engineering.

The selection process of the participants began by defining a set of prerequisites that the users would have to satisfy. The requirements outlined were as follows: the user must have a basic knowledge of the English language; the user must be totally unfamiliar with the platform; and, finally, the user must have basic theoretical and practical notions in the computing area.

Before the experiment, it was established that the first step would be to conduct interviews with the users. The objectives of the interviews were to verify that the subjects effectively fulfilled the prerequisites defined, and to collect personal information such as age, experience in the area, and whether they had previous experience with modelling tools with integrated speech recognition and/or synthesis mechanisms.

Regarding the usability tests, two tasks were outlined, both similar, but for two different languages. The first usability test task involved modelling with the state machine language (states and transitions), because it is a simple language and involves few concepts. The proposed problem was to create a state machine representing all the possible states and transitions of a traffic light. First, the subjects had to produce a diagram named Traffic Light. They had to create three state-type nodes and assign them the names red, yellow and green. Subsequently, they generated three transitions: one from the red state to the yellow state, another from the yellow to the green state, and finally one from the green to the red state. After that, the subjects had to list the diagram to verify that all elements had been created correctly. If all parts were created successfully, they were asked to create an attribute named seconds, of integer type, to represent the time that each of the traffic lights would be active. In Figure 8 we give a diagrammatic representation of the process for the first task performed by the subjects.

Fig. 8. State Machine Modelling Task - Traffic Light

This first task was chosen so that users could get to know the platform, its operation and the commands they can execute. The first blind subject took about 7 minutes to complete the task, the second blind subject took around 10 minutes, and the non-blind subjects took, on average, 5 minutes to complete the proposed task. A possible explanation for the time difference observed within the first group is that the first blind subject took less time than the second because he had academic qualifications and was more familiar with the modelling activity. As for the difference between groups, the non-blind users taking less time than the blind ones can be explained by their greater practice and fluency in diagram modelling.

Once this task was performed, the subjects were given the opportunity to model with a modelling language of their choice, which was the second and last task executed. The first blind user opted to model with a language that he had learned in his computing course, the UML class diagram language. This subject explored all the possible commands of the platform, and it took him about 10 minutes to try out all the executable commands. In general, all users were comfortable with the tasks involved, except the second blind subject, who was not confident in the second proposed task and preferred not to execute it because, not being familiar with Software Engineering, he did not know which modelling language to choose.

After the experiment, all users answered a questionnaire. The questionnaires assessed whether they liked the experiment, whether ModelByVoice was challenging to use, and whether they would recommend the platform to other users, among other aspects. One of the open questions was about the strong and weak points of the platform. All users highlighted the tool's concept of the navigation element, as well as the simplicity of the commands (not hard to remember), the existence of a help mode, and the quality of the feedback given by the platform. Concerning the weak points, the majority of users reported some glitches in the speech recognition technology, which in their opinion become quite annoying after some time spent modelling. Another weak point was the fact that there is no way to interrupt the feedback during execution; that is, the user must wait for the end of the speech synthesiser's response, even if he has already heard the information he wanted to know.

Their suggestions for change were the use of the keyboard, instead of voice recognition, as the input resource, and the ability to start and pause the voice recognition at any time, also with the keyboard.

A. Limitations

The evaluation included only two blind people, which is not statistically relevant. The reduced number of participating subjects is a threat to the validity of the preliminary assessment. However, this reflects the difficulty of contacting software engineers who are severely sight impaired; it was only thanks to ACAPO that we managed to contact the two users involved in the experiment. The number of modelling tasks performed in the exercises can also be a threat, due to their simplicity. However, as this was a preliminary assessment meant to guide future iterations of the tool, we opted to proceed this way, since the users had never used a voice recognition modelling application, and one of them had not even experienced the modelling activity before. As expected, despite the reduced number of subjects, the tests performed showed promising results and provided valuable feedback for the improvements that are underway.

V. CONCLUSIONS AND FUTURE CHALLENGES

A. Contributions

This work presents a prototype tool that allows people with visual impairment to create and edit diagrams for any modelling language, as long as that language is meta-modelled. The goal is to provide accessibility to the modelling activity both for blind people (and people with vision problems) and for users without physical limitations. With this tool, it is possible to edit and query (navigate) diagrams via voice. We expect this to be an enabler for the full integration of blind people into group projects involving modelling.

The preliminary usability tests with blind and non-blind subjects, although with anecdotal figures, allowed us to get first feedback on the previously mentioned characteristics, since the users approved the platform and left good suggestions for change, giving us confidence for future continuous development work.

B. Limitations and challenges

As mentioned in Section IV, the main limitations pointed out by the subjects in our preliminary experiment were related to technical failures (error rate) of the underlying speech recognition technology while running ModelByVoice. The expert users (with extensive practice with software technology and used to the keyboard) suggested using the keyboard as an input component, as it may prove to be faster and more efficient for executing commands than speech recognition. It should be noted that this limitation in speech recognition is due to the fact that we rely on an open-source voice tool (Sphinx) and on a voice tool (Google Cloud Speech-to-Text) that, despite being more efficient than the former, is also not 100% reliable in translating voice to text.

Another interesting observation, besides the need for some improved interaction mechanisms (like pause), is the request for introducing audio signals instead of pure voice synthesis feedback. We foresee that an interesting approach to deciding on the adequate concrete syntax, in this case, is to use an adaptation to audio of the design principles proposed in the Physics of Notations (PoN) [24]. Those principles are currently used to evaluate, compare and enhance the communication properties of a given software modelling language when designing its visual notations (concrete syntax).

C. Future Work

Taking into consideration the feedback and suggestions given by the users of ModelByVoice, some features may prove interesting to implement in the future. Among them, we have to consider giving users the possibility to choose the input resource they intend to use for the modelling activity, that is, to allow them to select voice recognition or the keyboard as the input resource, or even to combine both input sources at the same time.

Another interesting upgrade to the tool's architecture is the introduction of an intermediate, speech-recognition-platform-independent layer that would support any speech recognition API. This way, the tool could easily be reconfigured to work with any speech recognition tool.

Additionally, it would be interesting, and straightforward, to create an editor that could convert the created diagrams (which are in XMI) and generate the diagrams in graphical mode. This would allow the diagrams to be analysed by other entities or people who would eventually evaluate them.

Finally, as more challenging future work, this project raises the issue of what should be the systematic approach to assess an audio concrete syntax and interaction model regarding its usability.
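The platform-independent recognition layer proposed above could, for instance, take the shape of a single narrow interface behind which Sphinx-4, Google Cloud Speech-to-Text, or any future engine is hidden. A hedged sketch, with hypothetical names that are not part of any of those tools' real APIs:

```java
// Sketch of a recognizer-independent layer: the editor depends only on this
// interface, so concrete engines can be swapped without touching the editor.
interface SpeechRecognizer {
    String listen(); // blocks until an utterance is recognised, returns its text
}

class EditorFrontEnd {
    private final SpeechRecognizer recognizer;

    EditorFrontEnd(SpeechRecognizer recognizer) { this.recognizer = recognizer; }

    // normalises whatever the engine returns into a canonical command string
    String nextCommand() {
        return recognizer.listen().trim().toLowerCase();
    }
}

public class RecognizerDemo {
    public static void main(String[] args) {
        // a fake engine stands in for Sphinx-4 or a cloud API during testing
        SpeechRecognizer fake = () -> "  Create Node  ";
        EditorFrontEnd fe = new EditorFrontEnd(fake);
        System.out.println(fe.nextCommand()); // prints: create node
    }
}
```

A side benefit of such an adapter is testability: the editor logic can be exercised with scripted utterances, without any microphone or network dependency.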
We argue that such an assessment framework should be similar to PoN. In our perspective, an interesting line of research could be followed in this direction.

ACKNOWLEDGMENT

The authors would like to thank the NOVA LINCS Research Laboratory (Grant: FCT/MCTES PEst UID/CEC/04516/2013) and the DSML4MAS Project (Grant: FCT/MCTES TUBITAK/0008/2014).

The authors would also like to thank ACAPO (Associação dos Cegos e Amblíopes de Portugal) for providing us the contacts of software professionals, and for their availability to test our tool.

REFERENCES

[1] G. Cernosek and E. Naiburg, "The value of modeling," IBM White Paper, 2004.
[2] Stack Overflow, "Stats and analysis," Mar. 2018. [Online]. Available: https://insights.stackoverflow.com/survey/2018/#overview
[3] D. Steinberg, F. Budinsky, E. Merks, and M. Paternostro, EMF: Eclipse Modeling Framework. Pearson Education, 2008.
[4] D. S. Kolovos, L. M. Rose, S. B. Abid, R. F. Paige, F. A. Polack, and G. Botterweck, "Taming EMF and GMF using model transformation," in Model Driven Engineering Languages and Systems, ser. Lecture Notes in Computer Science, D. C. Petriu, N. Rouquette, and Ø. Haugen, Eds. Springer Berlin Heidelberg, 2010, vol. 6394, pp. 211–225. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-16145-2_15
[5] S. Cook, G. Jones, S. Kent, and A. Wills, Domain-Specific Development with Visual Studio DSL Tools, 1st ed. Addison-Wesley Professional, 2007.
[6] J. de Lara and H. Vangheluwe, "AToM3: A tool for multi-formalism and meta-modelling," in Fundamental Approaches to Software Engineering, 5th International Conference, FASE 2002, Grenoble, France, April 8-12, 2002, Proceedings, 2002, pp. 174–188. [Online]. Available: https://doi.org/10.1007/3-540-45923-5_12
[7] Vanderbilt University, "GME: Generic Modeling Environment," 2007. [Online]. Available: http://www.isis.vanderbilt.edu/Projects/gme/
[8] S. Kelly, K. Lyytinen, and M. Rossi, "MetaEdit+: A fully configurable multi-user and multi-tool CASE and CAME environment," in 8th International Conference on Advanced Information Systems Engineering, CAiSE'96, vol. 1080/1996. Heraklion, Crete, Greece: Springer Berlin/Heidelberg, 1996, pp. 1–21.
[9] M. Eysholdt and H. Behrens, "Xtext: Implement your language faster than the quick and dirty way," in Proceedings of the ACM International Conference Companion on Object Oriented Programming Systems Languages and Applications Companion, ser. OOPSLA '10. New York, NY, USA: ACM, 2010, pp. 307–309. [Online]. Available: http://doi.acm.org/10.1145/1869542.1869625
[10] S. Dmitriev, "Language oriented programming: The next programming paradigm," Onboard, JetBrains, Tech. Rep., 2004. [Online]. Available: http://www.onboard.jetbrains.com
[11] S. Beydeda, M. Book, V. Gruhn et al., Model-Driven Software Development. Springer, 2005, vol. 15.
[12] H. Petrie, C. Schlieder, P. Blenkhorn, G. Evans, A. King, A.-M. O'Neill, G. Ioannidis, B. Gallagher, D. Crombie, R. Mager et al., "TeDUB: A system for presenting and exploring technical drawings for blind people," in Computers Helping People with Special Needs, 2002, pp. 47–67.
[13] A. King, P. Blenkhorn, D. Crombie, S. Dijkstra, G. Evans, and J. Wood, "Presenting UML software engineering diagrams to blind people," in International Conference on Computers for Handicapped Persons. Springer, 2004, pp. 522–529.
[14] University of Manchester, "TeDUB and accessible UML," 2005. [Online]. Available: http://www.alasdairking.me.uk/tedub/index.htm
[15] B. Doherty and B. Cheng, "UML modeling for visually-impaired persons," in CEUR Workshop Proceedings, vol. 1522. CEUR-WS, 2015, pp. 4–10.
[16] F. Soares, "Uma abordagem para derivar modelos de requisitos a partir de mecanismos de reconhecimento de voz" (an approach to derive requirements models from speech recognition mechanisms), Master's thesis, Faculdade de Ciências e Tecnologia da Universidade de Lisboa, 2014.
[17] D. S. Kolovos, R. F. Paige, and F. A. Polack, "Eclipse development tools for Epsilon," in Eclipse Summit Europe, Eclipse Modeling Symposium, vol. 20062, 2006, p. 200.
[18] D. S. Kolovos, R. F. Paige, and F. Polack, "The Epsilon Transformation Language," in International Conference on Theory and Practice of Model Transformations. Springer, 2008, pp. 46–60.
[19] L. M. Rose, R. F. Paige, D. S. Kolovos, and F. A. Polack, "The Epsilon Generation Language," in European Conference on Model Driven Architecture - Foundations and Applications. Springer, 2008, pp. 1–16.
[20] W. Walker, P. Lamere, P. Kwok, B. Raj, R. Singh, E. Gouvea, P. Wolf, and J. Woelfel, "Sphinx-4: A flexible open source framework for speech recognition," Sun Microsystems, Inc., Tech. Rep. SMLI TR-2004-139, 2004.
[21] W. Walker, P. Lamere, and P. Kwok, "FreeTTS: A performance case study," Sun Microsystems, Inc., Tech. Rep. SMLI TR-2002-114, 2002.
[22] MBROLA. [Online]. Available: http://tcts.fpms.ac.be/synthesis/mbrola.html
[23] J. Ellson, E. Gansner, L. Koutsofios, S. C. North, and G. Woodhull, "Graphviz — open source graph drawing tools," in International Symposium on Graph Drawing. Springer, 2001, pp. 483–484.
[24] D. Moody, "The Physics of Notations: Toward a scientific basis for constructing visual notations in software engineering," IEEE Transactions on Software Engineering, vol. 35, no. 6, pp. 756–779, 2009.