In the early 1990s, whilst looking for a data modeling tool, I heard about object-role modeling (ORM). Intrigued by the productivity claims, I wanted to know more. So I flew to Seattle to meet Terry Halpin and the team who were developing InfoModeler for Paul Allen. I was so impressed that I agreed to promote and distribute InfoModeler in Europe.
Later, after I had taught many business analysts, software developers and university professors how to design databases using InfoModeler and VisioModeler, Terry asked me to co-author a book about Microsoft's new ORM tool called "Visio for Enterprise Architects" (VEA). (It's the blue book in the sidebar.) The book was published in 2003 to coincide with Microsoft's product launch.
Terry, together with Robert Meersman, co-organised the first international ORM Workshop, which took place in 1994 on Magnetic Island, Australia. Since 2005, the ORM Workshop has been run in conjunction with Robert Meersman's popular annual On The Move conference. At the 2005 Workshop, those present agreed by unanimous vote that an ORM Foundation was needed to promote ORM. This led me to set up this website, which I have maintained ever since.
The rest of this page summarises ORM's context, how it relates to other modeling methods, how ORM is used, its history and tools, and what you can find on this website.
The ORM 2014 Workshop was to be held on October 29-31 in Amantea, Southern Italy, but because too few papers were received by the OTM conference deadline, the workshop was cancelled. Instead, the ORM 2014 Workshop will be hosted by Professor Enrico Franconi at the Free University of Bozen-Bolzano, Italy, on 22 & 23 September 2014.
If you want to give an ORM-related presentation at the workshop, you need to submit an abstract to Terry Halpin.
To register on this website, just click on the "Join" button in the top right corner.
The context of object-role modeling
All too often, software projects greatly exceed their budgets and timescales, or are cancelled at great cost. People who study and write about failed IT projects, such as the journalist Tony Collins, often cite "vague requirements" as a main cause of project failure. So you might conclude that there would be fewer failed projects if each project began with a clear, agreed set of requirements that defined "what" the project had to deliver, before investing in a project to implement the "how".
In 1975, Fred Brooks said "Because our ideas are faulty, we have bugs" (The Mythical Man-Month). Twenty-five years later, after extensive research into the causes of IT project failures, Tony Collins suggested that every project should be split into two separate and independent contracts: the first contract should define the requirements, and the second should develop software and systems that conform to those requirements.
But projects still keep failing, which suggests that the "requirements lesson" learned in the 1950s has been forgotten, disparaged or discarded. So let's consider three possible sources of the problem: sponsoring managers, documentation and development methods.
Some sponsors just don't have time to get involved in the development process. Others, having agreed a project's concepts, benefits and budget, choose not to get involved and just want the developers to "get on with it". But whatever the style of management, busy sponsors would benefit from ensuring that their needs are understood by the developers. So what's the most efficient way to do that?
Developers need a clear understanding of what
the sponsors want. But even when requirements documents are prepared, they often contain dozens if
not hundreds of pages of verbose and ambiguous prose. Can this problem be solved?
Over the last 60 years, several methods have been
used to bridge the knowledge gap between sponsors and developers.
In the 1950s, the waterfall method was used for the SAGE computer project. Waterfall was also used by NASA's Apollo project, which operated with IMS, a hierarchical database management system. A hierarchical database is very fast when you want to find data by navigating the physical hierarchical pointer structure specified by the database designer, but it is much harder to extract data that is spread across several branches of the hierarchy.
In 1969, Edgar Codd proposed the relational model as a solution to this problem. The first sentence of his paper says "Future users of large data banks must be protected from having to know how the data is organized in the machine". His paper introduced the concept of normal form and the related process of normalization, which is used to avoid logical inconsistencies, anomalies and data duplication.
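To make the idea of duplication and anomalies concrete, here is a minimal sketch using SQLite. The customer/order example and all names in it are invented for illustration; the point is only that a normalized design stores each fact once, so an update cannot leave the database inconsistent.

```python
import sqlite3

# Illustrative normalized design: the fact "customer lives in city" is stored
# exactly once, in the customer table, rather than being repeated on every
# order row as a denormalized single-table design would.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT PRIMARY KEY, city TEXT NOT NULL)")
conn.execute("""CREATE TABLE customer_order (
                    order_id INTEGER PRIMARY KEY,
                    customer TEXT NOT NULL REFERENCES customer(name))""")

conn.execute("INSERT INTO customer VALUES ('Fred', 'Seattle')")
conn.execute("INSERT INTO customer_order (customer) VALUES ('Fred')")
conn.execute("INSERT INTO customer_order (customer) VALUES ('Fred')")

# Updating the city touches exactly one row, yet every order sees the change;
# the update anomaly of the denormalized design cannot occur.
conn.execute("UPDATE customer SET city = 'London' WHERE name = 'Fred'")
rows = conn.execute("""SELECT o.order_id, c.city
                       FROM customer_order o
                       JOIN customer c ON o.customer = c.name
                       ORDER BY o.order_id""").fetchall()
print(rows)  # [(1, 'London'), (2, 'London')]
```

Had the city been stored on both order rows, the same update would have had to find and change every copy, and missing one would leave the data contradicting itself.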
In 1976, Peter Chen proposed the Entity-Relationship (ER) model for database design. ER remains popular, but there are several different dialects. One problem with ER is that to get rid of anomalies and redundancies, you have to normalise your ER model using complex, time-consuming and error-prone manual methods such as functional dependency analysis. So my "elephant in the room" question for ER modelers is: "If it is so difficult to get rid of the anomalies and redundancies in your ER model, why did you put them there in the first place?" I wonder if Edsger Dijkstra was thinking about ER when he said "All unmastered complexity is of our own making!"
In the 1990s, Grady Booch, Ivar Jacobson and James Rumbaugh developed UML, a modeling language for designing object-oriented programs. With UML, you define a data structure by using a class diagram. But UML is complex, and it has been criticised as ambiguous and inconsistent, which makes it hard to design a class structure that does not contain problems such as data duplication.
Why ORM? ORM gives you an efficient way to define requirements, and it minimises the need for ambiguous prose. An ORM tool can generate a good class model and can automatically generate a fully normalised relational schema.
How do you make an object-role model? An ORM analyst guides the sponsors to express their ideas using simple sentences such as "The person called Fred was born on 15th July 1985" and "The person called Mary was born on 10th July 1990". In ORM, we call such sentences "facts". The analyst then looks for patterns in the facts. In this case, the facts fit the pattern "Person was born on BirthDate". We call such patterns "fact types".
Sometimes restrictions must be placed on the facts. For example, in the year 2014 you can't have someone with a birth date in the year 2015 or later. So you use ORM to put a constraint on the allowable values of "BirthDate", for example: "The value of BirthDate cannot be greater than today's date."
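As a hedged sketch of where such a fact type and constraint end up, here is one plausible relational rendering in SQLite. The table and column names are my own invention, not the output of any real ORM tool, and the 2014 rule from the text is expressed as a fixed CHECK bound ("no birth date in 2015 or later").

```python
import sqlite3

# Hypothetical mapping of the fact type "Person was born on BirthDate":
# each fact becomes a row, and the value constraint becomes a CHECK clause.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE person (
                    person_name TEXT PRIMARY KEY,
                    birth_date  TEXT NOT NULL
                        CHECK (birth_date < '2015-01-01'))""")

# The two facts from the text become rows:
conn.execute("INSERT INTO person VALUES ('Fred', '1985-07-15')")
conn.execute("INSERT INTO person VALUES ('Mary', '1990-07-10')")

# A fact that violates the constraint is rejected by the database itself:
try:
    conn.execute("INSERT INTO person VALUES ('Ann', '2999-01-01')")
except sqlite3.IntegrityError:
    print("rejected: birth date fails the constraint")

count = conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]
print(count)  # 2
```

The constraint lives in the schema rather than in application code, so every program that writes to the table is held to the same rule.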
The analyst builds the object-role model by adding each new fact type and constraint to it, then uses an ORM tool to generate easy-to-read sentences showing the fact types and constraints that have been defined within the model. This makes it easy for the sponsors to check that the model accurately represents their ideas: they just agree or disagree with the output generated by the ORM tool. The cycle of input and validation continues until the model is considered complete.
The analyst then uses the ORM tool to generate a data structure against which developers can write software. The data structure can be a class model or a relational database schema. When generating a relational schema, the ORM tool automatically produces a fully normalised schema, which avoids the considerable manual normalisation effort needed in the ER method. Thus, ORM helps to reduce development effort and the risk of making costly design errors.
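The flavour of that generation step can be sketched in a few lines. This toy is emphatically not any real ORM tool's mapping algorithm; it only illustrates the grouping idea: binary fact types in which the same object type plays the key role can be collected into one table keyed on that object type, and because each elementary fact type records a single fact, the result needs no further normalisation. All fact types and names below are invented.

```python
from collections import defaultdict

# Toy illustration of fact-type grouping (not a real ORM tool's algorithm):
# every binary fact type keyed on the same object type contributes one column
# to that object type's table.
fact_types = [
    ("Person",  "was born on", "BirthDate"),
    ("Person",  "lives in",    "City"),
    ("Country", "has",         "CountryCode"),
]

columns_by_key = defaultdict(list)
for key_object, _, value_object in fact_types:
    columns_by_key[key_object].append(value_object)

ddl = [
    f"CREATE TABLE {key} ({key}Name PRIMARY KEY, {', '.join(cols)});"
    for key, cols in columns_by_key.items()
]
for statement in ddl:
    print(statement)
# e.g. CREATE TABLE Person (PersonName PRIMARY KEY, BirthDate, City);
```

Real ORM mapping also handles many-to-many fact types, subtyping and the full range of constraints, but the grouping shown here is why the generated schema starts out free of redundancy.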
What is the history of ORM? ORM evolved from European research into semantic modeling during the 1970s. There were many contributors, and this paragraph mentions only a few. In 1973, the IBM
Systems Journal published a paper by Michael Senko about "Data Structuring".
In 1974 Jean-Raymond Abrial contributed an article about "Data
Semantics" and in June 1975, Eckhard Falkenberg published his doctoral
thesis. In 1976, Falkenberg used the term "object-role model" in one
of his papers. Later, Sjir Nijssen introduced the "circle-box"
notation together with an early version of the conceptual schema design
procedure. Robert Meersman added subtyping, and a conceptual query language.
In 1989, Terry Halpin formalised ORM in his PhD thesis. In the same year, Terry and Sjir Nijssen co-authored the book
"Conceptual Schema and Relational Database Design".
ORM Tools: Early ORM tools such as IAST and RIDL* (Control Data) were followed by InfoDesigner (ServerWare), InfoModeler (Asymetrix) and VisioModeler (Visio Corporation). In 2000, Microsoft bought the Visio Corporation and improved VisioModeler. In 2003, Microsoft published its first ORM implementation, "Microsoft Visio for Enterprise Architects" (VEA), as a component of the Enterprise Architect edition of Visual Studio.
Microsoft retained VEA in the high-end version of Visual Studio 2005, but then discontinued its ORM project. The book "Database Modeling with Microsoft Visio for Enterprise Architects" (see sidebar) contains a comprehensive guide to VEA. Since then, Terry Halpin and Matt Curland have been developing an ORM tool called "Natural ORM Architect for Visual Studio" (NORMA). You can download the latest release of NORMA from the Library.
How can I learn more? The
definitive book on ORM is "Information Modeling and Relational Databases -
Second Edition". You can order the book by clicking on the image in the sidebar.
The research page describes the scientific experiment that I used to support my MSc dissertation in 2008. I designed the experiment to test the hypothesis that "ORM-based methods require at least 25% less effort than alternative methods such as UML and ER."
The Forum contains over 3,000 posts of ORM-related discussion. The Library contains ORM software, ORM tutorials and over one hundred presentations of peer-reviewed scientific papers given by ORM researchers at the annual ORM Workshop.
You can browse the Library and Forum, and registered members can download documents and participate in the forum discussions. Registration is free - just click on the "Join" button at the top right of the screen and follow the instructions.