Design

The Data Modeling Toolchain is, in essence, a (potentially large) set of code generators. Code generators are known to suffer from a serious problem: rigidity. Because they address code generation at a meta-level (at the level of the ``notions'' that drive the code genesis), their behaviour is difficult to change or extend. The whole process is usually so delicate that in most cases only the creator of the code generator knows enough to modify it; even within the walls of the company building it, the people ``allowed'' to work on it are usually a select few.

This could not be tolerated in ASSERT: the project leader (ESA) highlighted the importance of ``tweakability'' from the outset. The code generators were not only expected to cover the functional requirements of the project; they had to do so in an extensible and modular way. In fact, the desire was expressed for a scaffolding that would allow the end user, and not just the developer, of the toolchain to easily modify existing code generators or create new ones.

To accommodate this, a domain-specific language was initially adopted. This language was a simple extension of the ubiquitous Python scripting language. It gave the developer of the code generators access to information on all the entities stored in the AADL and ASN.1 parse trees, and allowed any Python construct (if/elif/else, try/except, for loops, etc.) to be used to process them and generate output.
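
The following is a minimal sketch of the kind of generator such a language enables: ordinary Python control flow walking parse-tree entities and emitting target code. The names used (Entity, emit_c_struct, the ``Telemetry'' example) are illustrative stand-ins, not the actual API of the ASSERT toolchain.

\begin{verbatim}
# Hypothetical sketch of a Python-based code generator; the Entity class
# is a simplified stand-in for a node of the ASN.1/AADL parse tree.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Entity:
    name: str
    kind: str                       # e.g. "SEQUENCE", "INTEGER", "REAL"
    children: List["Entity"] = field(default_factory=list)


def emit_c_struct(entity: Entity) -> str:
    """Generate a C struct for a SEQUENCE-like entity, using plain
    Python control flow (for loops, if/elif/else) over its children."""
    lines = ["typedef struct {"]
    for child in entity.children:
        if child.kind == "INTEGER":
            lines.append("    long %s;" % child.name)
        elif child.kind == "REAL":
            lines.append("    double %s;" % child.name)
        else:
            lines.append("    /* unsupported kind: %s */" % child.kind)
    lines.append("} %s;" % entity.name)
    return "\n".join(lines)


if __name__ == "__main__":
    msg = Entity("Telemetry", "SEQUENCE",
                 [Entity("counter", "INTEGER"), Entity("voltage", "REAL")])
    print(emit_c_struct(msg))
\end{verbatim}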

As the code generators for specific backends (SCADE, ObjectGeode, etc.) were being written, new needs arose: it became clear that static code analysis and statement coverage would provide significant help in identifying errors and hidden traps. These could, however, only be introduced within the project's allotted timeframe if the domain-specific language stopped being ``almost Python'' and became true Python, for which ready-made versions of these tools already exist.
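
As a minimal illustration of what moving to true Python buys, the standard-library trace module can gather statement coverage for any pure-Python script. The main() function below is a placeholder for a generator entry point, not part of the actual toolchain.

\begin{verbatim}
# Hypothetical sketch: statement coverage via the standard-library
# trace module; main() stands in for a real generator entry point.
import trace


def main():
    for kind in ("INTEGER", "REAL", "SEQUENCE"):
        if kind == "SEQUENCE":
            print("composite type")
        else:
            print("basic type")


tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(main)
# Write per-module .cover files annotated with execution counts.
tracer.results().write_results(summary=True, coverdir=".coverage_data")
\end{verbatim}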