In the early 1990s, whilst looking for
a data modeling tool, I heard about object-role modeling (ORM). Intrigued by the productivity claims, I wanted to know more. So I flew to Seattle to meet Terry Halpin and the team who
were developing InfoModeler for Paul Allen. I was so impressed that I agreed to promote and distribute InfoModeler in Europe.
Later, after I had taught many business
analysts, software developers and university professors how to design databases using InfoModeler
and VisioModeler, Terry asked me to co-author a book about Microsoft's new ORM tool called "Visio for Enterprise Architects" (VEA). (It's the blue book in the sidebar.) The book was published in 2003 to coincide with Microsoft's VEA product launch.
Terry, together with Robert Meersman, co-organised the first international ORM Workshop which took place in 1994 on Magnetic Island, Australia. Between 2005 and 2013, the ORM Workshop was run in conjunction with Robert Meersman's annual On The Move conference. At the 2005 Workshop, those present agreed by unanimous vote that an ORM Foundation was needed
to promote ORM. This led me to set up this website as a service to the global ORM Community.
ORM 2014 Workshop: This year, the ORM 2014 Workshop was held in the beautiful town of Bolzano, Italy. The two-day event started on Monday September 22 at the KRDB Research Centre in the Faculty of Computer Science at the Free University of Bozen-Bolzano. The workshop was kindly hosted by Professor Enrico Franconi and chaired by Professor Terry Halpin.
The ORM workshop was followed by a meeting of the Fact Based Modeling Working Group which was chaired by Serge Valera of the European Space Agency. This group is developing a standard with the intention of submitting it to one or more international standards bodies.
The rest of
this page summarises ORM's context, how it relates to other modeling methods, how
ORM is used, its history and tools and what you can find in this website.
The context of object-role modeling
All too often, software projects greatly
exceed budget and timescale or are cancelled at great cost. People who study
and write about failed IT projects such as journalist Tony Collins often cite "vague requirements" as a main cause of project failure.
So you might conclude that there would be
fewer failed projects if each project started with a clear and agreed set of
requirements that defined "what" the project had to deliver before investing in
a project to implement the "how".
In 1975, Fred Brooks said
"Because our ideas are faulty, we have bugs."
(The Mythical Man-Month). Twenty-five years later, after extensive
research into the causes of IT project failures, Tony Collins suggested that every project should be split into two separate and independent contracts:
The first contract should define requirements and the second contract should develop
software and systems that conform to the requirements.
But projects still keep failing which suggests
that the "requirements lesson" learned in the 1950s seems to have been forgotten,
disparaged or discarded. So let's consider three possible problem sources: sponsoring
managers, documentation and development methods.
Some sponsors just don't have time to get involved in the
development process. Others, having agreed a project's concepts, benefits and
budget, choose not to get involved and just want the developers to "get on with
it". But whatever the style of management, busy sponsors would benefit by ensuring
that their needs are understood by developers. So what's the most efficient way to do that?
Developers need a clear understanding of what
the sponsors want. But even when requirements documents are prepared, they often contain dozens if
not hundreds of pages of verbose and ambiguous prose. Can this problem be solved?
Over the last 60 years, several methods have been
used to bridge the knowledge gap between sponsors and developers.
In the 1950s, the waterfall method was used
for the SAGE
computer project. Waterfall was also used by NASA's Apollo project, which
ran on IMS, a hierarchical
database management system. A hierarchical database is very fast when you
want to find data by navigating the physical hierarchical pointer structure
that was specified by the database designer. But it is much harder to extract data
that is spread across several branches of the hierarchy.
In 1969, Edgar Codd proposed the relational model as a
solution to this problem. The first sentence of his paper says "Future users
of large data banks must be protected from having to know how the data is
organized in the machine". His paper introduced
the concept of normal
form and the related process of normalization
which is used to avoid logical inconsistencies, anomalies and data duplication.
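To make the duplication problem concrete, here is a small sketch in Python of what normalization buys you. The data and field names are invented for illustration: a flat order table repeats each customer's city on every row, and splitting it into two structures records each customer's city exactly once.

```python
# Illustrative sketch: removing duplication by normalizing a flat table.
# The data and field names are invented for this example.

flat_orders = [
    {"order_id": 1, "customer": "Fred", "city": "Leeds", "item": "Widget"},
    {"order_id": 2, "customer": "Fred", "city": "Leeds", "item": "Gadget"},
    {"order_id": 3, "customer": "Mary", "city": "York",  "item": "Widget"},
]

# Normalize: store each customer's city once, and keep orders separately.
customers = {}
orders = []
for row in flat_orders:
    customers[row["customer"]] = row["city"]
    orders.append({"order_id": row["order_id"],
                   "customer": row["customer"],
                   "item": row["item"]})

# "Fred lives in Leeds" is now recorded exactly once, so it cannot
# become inconsistent across rows (an update anomaly).
print(customers)    # {'Fred': 'Leeds', 'Mary': 'York'}
print(len(orders))  # 3
```

In the flat table, changing Fred's city means updating every one of his order rows; miss one and the data contradicts itself. After the split, the fact is stored once.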
In 1976, Peter Chen proposed the Entity-Relationship
(ER) model for database design. ER remains popular but there are several
different dialects. One problem with ER is that, to get rid of anomalies and redundancies, you have to normalize your ER model using complex, time-consuming
and error-prone manual methods such as functional dependency
analysis. So my "elephant in the room" question for ER modelers is: "If it is so difficult to get rid of the anomalies and redundancies in your ER model, why did you put them there in the first place?" I wonder if Edsger Dijkstra was thinking about ER when he said "All unmastered complexity is of our own making!"
In the 1990s, Grady Booch, Ivar Jacobson and
James Rumbaugh developed UML, a modeling language for designing object-oriented
programs. With UML, you define a data structure by using a class diagram. UML is complex and has been criticized as being ambiguous and
inconsistent which makes it hard to design a class structure that does not
contain problems such as data duplication.
Why ORM? An object-role model avoids the need to write long documents in ambiguous
natural language prose. It's easy for non-technical sponsors to validate an object-role model because ORM tools can generate easy-to-understand sentences. After an object-role model has been validated by non-technical domain experts, the model can be used to generate a class model or a fully normalized database schema.
How do you make an object-role model? An ORM analyst guides sponsors to express their ideas using simple
sentences such as "The person called Fred was born on 15th July
1985." and "The person called Mary was born on 10th July 1990". In
ORM we call such sentences "facts". The analyst
then looks for patterns in the facts. These two facts fit the pattern "Person
was born on BirthDate". We call such patterns "fact types".
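The two facts above can be seen mechanically as instances of one pattern. As a sketch only (no ORM tool works this simply), a regular expression in Python can pull the varying parts out of the fixed sentence pattern:

```python
import re

# Sketch: extracting instances of the fact type
# "Person was born on BirthDate" from simple fact sentences.
# The sentence pattern is assumed for illustration.

FACT = re.compile(r"The person called (\w+) was born on (.+?)\.?$")

facts = [
    "The person called Fred was born on 15th July 1985.",
    "The person called Mary was born on 10th July 1990.",
]

instances = [FACT.match(f).groups() for f in facts]
print(instances)  # [('Fred', '15th July 1985'), ('Mary', '10th July 1990')]
```

The fixed wording is the fact type; the captured groups are the objects playing roles in it.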
Sometimes restrictions must be placed on the
facts. For example, in the year 2014, you can't have someone with a birth date
that is in the year 2015 or later. So you use ORM to put a constraint on the
allowable values of "BirthDate". For example: "The value of BirthDate cannot be greater
than today's date."
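That constraint is easy to state as code. As an illustrative sketch (the helper name is invented, not part of any ORM tool), in Python:

```python
from datetime import date

# Sketch of the constraint "The value of BirthDate cannot be greater
# than today's date." The function name is illustrative only.

def valid_birth_date(birth_date, today=None):
    today = today or date.today()
    return birth_date <= today

# Checked against a fixed "today" of 1 January 2014:
print(valid_birth_date(date(1985, 7, 15), today=date(2014, 1, 1)))  # True
print(valid_birth_date(date(2015, 1, 1),  today=date(2014, 1, 1)))  # False
```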
The analyst creates the object-role model by adding
each new fact type and constraint into the object-role model. The analyst uses an ORM tool to generate easy-to-read
sentences that show the fact types and constraints that have been defined within
the object-role model.
This makes it
easy for sponsors to check that the model accurately represents their ideas.
The sponsor just agrees or disagrees with the output generated by the ORM tool.
The cycle of input and validation continues until the model is considered complete.
The analyst then generates a data structure against which developers can
write software. The data structure can be a class model or a relational database schema.
When generating a relational schema, the ORM tool automatically generates a fully normalized
schema which avoids the considerable effort required for manual normalization that is needed in the ER method. Thus,
ORM helps to reduce development effort and the risk of making costly design errors.
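As a toy sketch of the schema-generation step, the "Person was born on BirthDate" fact type could map to a table definition like the one below. This is illustrative only; real ORM tools first group fact types by their functional roles before emitting a normalized schema.

```python
# Toy sketch: mapping one binary fact type to a CREATE TABLE statement.
# Column names and types are assumptions for this illustration.

def fact_type_to_ddl(entity, value, value_type="DATE"):
    return (f"CREATE TABLE {entity} (\n"
            f"    {entity}Name VARCHAR(50) PRIMARY KEY,\n"
            f"    {value} {value_type} NOT NULL\n"
            f");")

ddl = fact_type_to_ddl("Person", "BirthDate")
print(ddl)
```

Because each fact type records one elementary fact, the tables generated from them carry no repeated facts, which is why the output is normalized by construction.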
What is the history of ORM? ORM has
evolved from European research into semantic modeling during the 1970s. There
were many contributors and this paragraph only mentions a few. In 1973, the IBM
Systems Journal published a paper by Michael Senko about "Data Structuring".
In 1974 Jean-Raymond Abrial contributed an article about "Data
Semantics" and in June 1975, Eckhard Falkenberg published his doctoral
thesis. In 1976, Falkenberg used the term "object-role model" in one
of his papers. Later, Sjir Nijssen introduced the "circle-box"
notation together with an early version of the conceptual schema design
procedure. Robert Meersman added subtyping, and a conceptual query language.
In 1989, Terry Halpin formalized ORM in his PhD thesis. In the same year, Terry and Sjir Nijssen co-authored the book
"Conceptual Schema and Relational Database Design".
ORM Tools: Early ORM
tools such as IAST and RIDL* (Control Data) were followed by InfoDesigner
(ServerWare), InfoModeler (Asymetrix) and VisioModeler (Visio Corporation). In
2000, Microsoft bought the Visio Corporation and improved VisioModeler. In 2003, Microsoft published its first ORM
implementation as a component of Visual Studio for Enterprise Architects called
"Microsoft Visio for Enterprise Architects" (VEA).
Microsoft retained VEA in the high-end version
of Visual Studio 2005 but then discontinued its ORM project. The book "Database
Modeling with Microsoft Visio for Enterprise Architects" (see sidebar) contains a comprehensive guide to VEA.
Since then, Terry Halpin and Matt Curland have been developing an ORM tool called "Natural ORM
Architect for Visual Studio" (NORMA). You can download the latest release of NORMA from the Library.
How can I learn more? The
definitive book on ORM is "Information Modeling and Relational Databases -
Second Edition". You can order the book by clicking on the image in the sidebar.
The research page describes the scientific experiment that I used to support my MSc dissertation
in 2008. I designed the experiment to test the hypothesis that "ORM-based methods require at least 25% less effort than alternative
methods such as UML and ER."
The Forum contains over
3,000 posts of ORM-related discussions. The Library contains ORM software, ORM tutorials and over one hundred ORM presentations
of peer-reviewed scientific papers given by ORM researchers at the annual ORM Workshops.
You can browse
the Library and Forum and registered
members can download
documents and participate in the forum discussions.
During my recent visit to the KRDB Research Centre in Bolzano, Italy, Enrico Franconi introduced me to a book called "The Description Logic Handbook". After a little more research, I found another book called "Description Logic Framework to Synchronize ORM Model and OWL Language", published in 2012. I hope you find these books useful...
Hi Karl, I have some minor changes to make before changing the dll version and officially releasing. In the meantime, http://ormsolutions.com/NORMAPreview is kept up to date with builds from the latest sources and will be very close to the next official build. -Matt
Hi Karl, Earlier this week when we met in Bolzano, Matt told me that there is not much work left to do on the next release so he would get it out "pretty soon". However, he is working flat out on some urgent projects so "pretty soon" for him is probably "not soon enough" for the rest of us. Ken
Hi, I don't want to sound impatient but I was wondering when the next update to NORMA will be available. Regards, Karl