Knowledge is an organization’s understanding of the information and data that is crucial to its software products. How should an organization best capture this knowledge?
It has to scope the knowledge, structure it in some systematic way, make it accessible to the people in the organization, evolve it over time, and ultimately do something with it: typically, encode it into software as part of its products.
The traditional approach is to write things down as prose, with a couple of explanatory diagrams, sometimes formulas, and often lots of tables full of numbers. Text can be consumed by anyone, but a prose representation does not let you do anything with the knowledge. You cannot check it for completeness or consistency, you cannot transform it into different representations, and of course you cannot execute it. You can only display, print and read it.
The other mainstream approach is to encode the knowledge directly in program code. This lets you execute it, but execution is pretty much the only thing you can do with it. It is very hard to reverse engineer the domain-level semantics, which makes meaningful analysis hard. Source code is also not very understandable to non-programmers such as your analysts and domain experts. They might revert to writing text – now called requirements. It is also hopeless to try to extract the knowledge from program text and transform it into a different representation, such as source code in a different programming language. Really, encoding knowledge in source code effectively buries it there.
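To illustrate the burying effect, here is a hypothetical sketch (the scenario, names and thresholds are invented for illustration, not taken from the book): a domain rule encoded directly in application code, entangled with logging and persistence. Recovering "the rules" from code like this means parsing and analyzing the program text itself.

```python
# Hypothetical example: a threshold rule buried in application code.
# The domain decision ("below 70 means hypoglycemia") lives only in the
# source text, mixed with logging and a persistence stand-in, so no tool
# can reliably extract the rule set without analyzing the program itself.
import logging

EVENTS = []  # stand-in for a persistence layer


def handle_measurement(value_mg_dl):
    logging.info("measurement: %s", value_mg_dl)
    if value_mg_dl < 70:                 # domain rule hidden in an if-cascade
        EVENTS.append(("hypo", value_mg_dl))
        return "alert_hypo"
    if value_mg_dl < 180:
        return "no_change"               # the "normal range" is only implicit
    EVENTS.append(("high", value_mg_dl))
    return "increase_dose"


print(handle_measurement(65))            # alert_hypo
```

Note that the thresholds, the actions, and the implicit "normal range" are all invisible at the domain level; they exist only as Python syntax.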
Knowledge encoded as DSL models, in contrast, can be executed through interpretation or code generation. It is also independent of the actual execution technology, so porting to another technology is easy. DSLs support analyses relevant to the domain. And while a DSL is not as trivially approachable as prose, a good one can certainly be learned and used by non-programmers. Simulators and other ways of bringing the knowledge to life directly in the DSL IDE also help a lot.
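A minimal sketch of this idea, using an invented toy model (the structure, field names and thresholds are hypothetical, not the actual Voluntis DSLs): the knowledge is a structured model, one function interprets it directly, and another performs a domain-level analysis that would be impossible on prose and hard on general-purpose code.

```python
# A toy "model" in a hypothetical mini-DSL: map a measured value to an action.
model = {
    "parameter": "blood_glucose",
    "unit": "mg/dL",
    "rules": [                           # ordered threshold rules
        {"below": 70,  "action": "alert_hypo"},
        {"below": 180, "action": "no_change"},
        {"below": 400, "action": "increase_dose"},
    ],
}


def interpret(model, value):
    """Execute the model directly: return the action for a measurement."""
    for rule in model["rules"]:
        if value < rule["below"]:
            return rule["action"]
    return "escalate"                    # fall-through: out of modeled range


def check_consistency(model):
    """A domain-relevant analysis: thresholds must strictly increase,
    otherwise some rule is unreachable."""
    thresholds = [r["below"] for r in model["rules"]]
    return all(a < b for a, b in zip(thresholds, thresholds[1:]))


print(interpret(model, 65))              # alert_hypo
print(check_consistency(model))          # True
```

Because the model is data rather than program text, the same model could just as easily feed a code generator for a different target technology, or a simulator in the DSL IDE.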
But can’t I just encode my domain knowledge with an existing modeling language? There is a long tradition of “analysis modeling”. However, you absolutely need a well-defined language, otherwise the semantics are unclear and you cannot execute the models. And using UML, for example, in a way that is precise enough for execution is cumbersome. Every domain has its own jargon, its own conventions, and often its own notations. You don’t want to encode your knowledge in jargon-free English; similarly for models: you want a language that fits the things you want to express.
In addition, the process of building the language itself helps you understand the jargon, conventions and notations of the domain, and it forces you to nail down a precise meaning. In some sense, the DSL definition is meta-knowledge: knowledge that is relevant to the whole of your domain. In fact, it can be seen as the authoritative definition of your domain. Don’t risk that benefit by trying to shoehorn your knowledge into a semi-formal and imprecise general-purpose modeling language.
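The meta-knowledge point can be sketched roughly as follows (a hypothetical illustration with invented names, not the book's actual language machinery): a tiny "metamodel" states what any valid model of the domain must look like, so the language definition itself acts as the authoritative definition of the domain, and candidate models can be checked against it.

```python
# A hypothetical "metamodel": the language definition as meta-knowledge.
METAMODEL = {
    "required_fields": ["parameter", "unit", "rules"],
    "allowed_actions": {"alert_hypo", "no_change", "increase_dose"},
}


def conforms(model):
    """Check a candidate model against the language definition."""
    if any(f not in model for f in METAMODEL["required_fields"]):
        return False
    return all(r["action"] in METAMODEL["allowed_actions"]
               for r in model["rules"])


good = {"parameter": "blood_glucose", "unit": "mg/dL",
        "rules": [{"below": 70, "action": "alert_hypo"}]}
bad = {"parameter": "blood_glucose", "unit": "mg/dL",
       "rules": [{"below": 70, "action": "do_whatever"}]}  # not in the language

print(conforms(good), conforms(bad))     # True False
```

A semi-formal general-purpose language cannot give you this kind of mechanical check, because its notion of a "valid model" is not specific to your domain.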
For Voluntis, we have developed a set of DSLs that captures the knowledge about healthcare diagnostics and therapeutics algorithms. The models can be analyzed, simulated and ultimately executed directly on a mobile phone. The details are described in Chapter 6 of this paper.