Hi All,
Thank you for your thoughts on this. It does not help me much in finding the answer, but possibly it helps to phrase the scope more clearly.
Victor, you say “It's kind of the ‘marketing’ take”. I disagree, however, because answering the questions that I asked is, I think, part of the requirements analysis for conceptual modelling languages and their CASE (OBDA, and so forth) tools. To be able to make good, useful modelling tools, one would need to know which gaps (yes, plural) they are, or should be, filling.
While there are the obvious requirements of having a software-supported, graphical and textual interface for the modelling, and, ideally, some (semi-)automated translation to the design level and implementation code, there are other ‘add-ons’ (or essentials…), such as having the option to go from examples/facts to the type level, or other guidance with decision diagrams, modelling patterns, ontology-inspired modelling, and whatnot. But if one does not know why people do not use all the features or abandon conceptual modelling altogether, then any extra ‘add-on’ to improve the modelling experience is a random shot in the dark at worst and a bright hunch at best.
Maybe it is neither the tools nor the modelling methodology. Listing some alternative options, in random order:
(i) The ‘average modeller’ has had insufficient training and does not know about all the features available in the language;
(ii) Option (i) + the ‘average modeller’—say, a vocational, industry, or BSc-level trained person—is, well, him/herself just not intelligent enough to grasp the complexities of conceptual analysis (implicitly saying it requires at least MSc-level insights);
(iii) The education is up to scratch, so that young graduates actually do have a clue about modelling and would use a language to the full, but the old guys with little conceptual modelling experience think they know better anyway (hence, it would just be a matter of time until they finally retire);
(iv) The subject domain is not that complex at all, so we can do with a more restrictive language;
(v) The subject domain is complex, and one would need most, if not all, of the features in each conceptual model;
(vi) The subject domain is complex, but for the prospective application such details can safely be omitted without disrupting the desired functionality;
(vii) Option (vi), but the customers do not realise one can put much more in an application, hence, in the conceptual model.
    a. As a variation: would the onus be on the modeller to inform the customer, or should the customer think a little deeper?
(viii) Option (iv) or (vi) holds ‘in most cases’ compared to (v).
So, lots of hypotheses that precede marketing by a large margin. I am not an expert in investigating these matters (imho, it would be crazy if extensive research on such themes has not been done already).
As for anecdotal evidence, I have come across (i), (ii), (iii), (v), (vi), and (vii), leading me to the idea that statistical evidence will eventually have to be produced to uncover tendencies toward one or the other.
On the other hand, if we look at (E)ER, UML, and ORM as languages, irrespective of the modelling methodologies, tools, and other factors, then up until about two years ago development went only in the direction of more features and greater expressiveness, whereas only in the last two years have I come across a few papers where it was claimed that ‘in most cases’ we could do with fewer features. Invariably, those papers focussed on online usage of the conceptual model, as opposed to the static artefact it has been for most of the past three decades. In that respect, the well-known debate on ORM vs. ER/UML has gained an extra argumentation dimension, i.e., for going for, or staying with, ‘simpler’ languages due to computational limitations—but now there are actually demonstrable advantages of doing so (conceptual querying, satisfiability checking, among others), which were not there in the ‘classical’ ORM vs. ER/UML debate.
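To make the satisfiability-checking point concrete: a minimal toy sketch (my own illustration, not taken from any of the papers alluded to above) of deciding whether a binary association with exact cardinality constraints on both ends admits a non-empty finite population. All names and the exact-cardinality setting are assumptions for the example:

```python
# Toy finite-satisfiability check for a binary association R between
# classes A and B, where every A must participate in exactly `card_a`
# R-links and every B in exactly `card_b` R-links.

def satisfiable(card_a, card_b, equal_populations=False, max_size=6):
    """Search for non-empty populations |A| = na, |B| = nb up to
    `max_size` that admit a set of R-links meeting both cardinalities.
    A bipartite graph with degrees (card_a, card_b) on (na, nb) nodes
    exists iff na*card_a == nb*card_b, card_a <= nb, and card_b <= na.
    """
    for na in range(1, max_size + 1):
        for nb in range(1, max_size + 1):
            if equal_populations and na != nb:
                continue  # e.g. A and B constrained to equally sized populations
            if na * card_a == nb * card_b and card_a <= nb and card_b <= na:
                return True
    return False

print(satisfiable(2, 3))                          # satisfiable: |A|=3, |B|=2 works
print(satisfiable(1, 2, equal_populations=True))  # unsatisfiable: na = 2*na forces na = 0
```

Crude as it is, it shows why a more restricted language pays off computationally: with only this kind of constraint, the check is a trivial counting argument, whereas richer languages push the same question toward expensive or undecidable reasoning.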
Best regards,
Marijke