Supporting user-oriented analysis for multi-view domain-specific visual languages

Esther Guerra, Juan de Lara, Alessio Malizia, Paloma Díaz

Research output: Contribution to journal › Article › peer-review

38 Citations (Scopus)

Abstract

The integration of usable and flexible analysis support in modelling environments is a key success factor in Model-Driven Development. In this paradigm, models are the core asset from which code is automatically generated, and thus ensuring model correctness is a fundamental quality-control activity. For this purpose, a common approach is to transform the system models into formal semantic domains for verification. However, if the analysis results are not presented to the end-user in a suitable way (e.g. in terms of the original language), they may become useless. In this paper we present a novel domain-specific visual language (DSVL) called BaVeL that facilitates the flexible annotation of verification results obtained in semantic domains in different formats, including the context of the original language. BaVeL is used in combination with a consistency framework, providing support for all steps in a verification process: acquisition of additional input data, transformation of the system models into semantic domains, verification, and flexible annotation of the analysis results. The approach has been validated analytically using the cognitive dimensions framework, and empirically through its implementation and application to several DSVLs. Here we present a case study of a notation in the area of Digital Libraries, where the analysis is performed by transformations into Petri nets and a process algebra.
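To make the verification step described in the abstract concrete, the sketch below shows the kind of analysis that a transformation into Petri nets enables: exhaustive exploration of reachable markings to detect dead states, whose results can then be reported back in terms of the source model. This is a minimal, hypothetical illustration; the PetriNet class, the find_deadlocks routine, and the workflow net are assumptions for exposition and are not BaVeL's actual transformation or implementation.

```python
from collections import deque

# Minimal Petri net: places hold tokens; a transition is enabled when every
# input place has at least one token. Firing consumes one token from each
# input place and produces one in each output place.
class PetriNet:
    def __init__(self, transitions):
        # transitions: {name: (input_places, output_places)}
        self.transitions = transitions

    def enabled(self, marking, name):
        inputs, _ = self.transitions[name]
        return all(marking.get(p, 0) > 0 for p in inputs)

    def fire(self, marking, name):
        inputs, outputs = self.transitions[name]
        new = dict(marking)
        for p in inputs:
            new[p] -= 1
        for p in outputs:
            new[p] = new.get(p, 0) + 1
        return new

def find_deadlocks(net, initial, limit=10_000):
    """Breadth-first exploration of the reachability graph, collecting
    markings in which no transition is enabled (dead markings)."""
    seen = {frozenset(initial.items())}
    queue = deque([initial])
    deadlocks = []
    while queue and len(seen) < limit:
        marking = queue.popleft()
        enabled = [t for t in net.transitions if net.enabled(marking, t)]
        if not enabled:
            deadlocks.append(marking)
        for t in enabled:
            nxt = net.fire(marking, t)
            key = frozenset(nxt.items())
            if key not in seen:
                seen.add(key)
                queue.append(nxt)
    return deadlocks

# Hypothetical net obtained from a Digital Library workflow model:
# a document is reviewed and then published, sharing a librarian resource.
net = PetriNet({
    "start_review":  (["doc", "librarian"], ["reviewing"]),
    "finish_review": (["reviewing"], ["reviewed", "librarian"]),
    "publish":       (["reviewed", "librarian"], ["published", "librarian"]),
})
for m in find_deadlocks(net, {"doc": 1, "librarian": 1}):
    # Back-annotation step: report the analysis result in terms of the
    # source model's elements rather than the raw semantic-domain marking.
    print("Dead marking reached:", m)
```

In the approach the paper describes, this last reporting step is where a language such as BaVeL would come in: instead of printing raw markings, the results would be annotated back onto the original DSVL model so the end-user never has to read the Petri-net encoding.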
Original language: English
Pages (from-to): 769-784
Number of pages: 16
Journal: Information and Software Technology
Volume: 51
Issue number: 4
DOIs
Publication status: Published - Apr 2009

Keywords

  • Back-annotation
  • Consistency
  • Domain-specific visual languages
  • Formal methods
  • Model transformation
  • Modelling environments
