
The ORM Foundation

Get the facts!

Natural human input for model development?

Last post Fri, Oct 29 2010 3:01 by Anonymous. 14 replies.
  • Fri, Oct 22 2010 22:13

    • JParrish
    • Top 25 Contributor
      Male
    • Joined on Fri, May 30 2008
    • Florida, USA
    • Posts 25

    Natural human input for model development?

     This is nothing more than a thought, aloud, to see what others may have also contemplated. The origin of this thought is that during requirements gathering in every enterprise or small office I've encountered, the one truly common theme is that individuals quickly abandon verbose language in favor of augmenting the discussion with visual models. These are most often informal ( concept ) --- relation --> ( concept ) sketches, but in one form or another (ER, UML, mind map, etc.), people seem to prefer to use their hands and a scribbling apparatus.

     So to narrow the focus to ORM, and the means by which we construct an ORM model, consider for a moment: in one of these meetings, with a tablet PC in hand, would it be more efficient, or more "natural", to scribble the geometry that we know to represent ORM2... and have a tool that could adaptively recognize the elements of the model? That is to say, as you draw, you aren't concerned with the validation of the model so much as the validation of the geometry (can the system accurately recognize the various shapes and intersections in a way that taxonomically adds value?).

    I've experimented a tiny bit now with Microsoft's "Ink" libraries, but before even seriously considering trying to write a recognizer for simple ORM2 geometry, I was curious how useful something like this would be.

     In my mind, should something like this work nearly flawlessly, and benefit from hardware-accelerated panning and zooming, it would be a very natural and viable alternative to structured input.

  • Sat, Oct 23 2010 16:19 In reply to

    • Ken Evans
    • Top 10 Contributor
      Male
    • Joined on Sun, Nov 18 2007
    • Stickford, UK
    • Posts 805

    Re: Natural human input for model development?

    John,

    Interesting idea. 
    Maybe I'm just a creature of habit, but I can't see how such a tool would fit into the way I make models.
    It takes me quite a while to get the facts right and being able to draw ORM diagrams quickly is not one of my problems.

    I almost always use the "Conceptual System Design Procedure" (CSDP) as described in the BBB.
    In short, this is about getting information from domain experts, entering facts into the ORM tool and then letting NORMA or VEA draw the diagrams for me.
    I use the verbalizer to make sure that what I have put in is exactly what I want to represent in the model.

    As another example, some ORM analysts interact with their domain expert clients solely with verbalised facts.
    They rarely use ORM diagrams with clients.

    So as I see it, the ORM analyst's "problem" is not about drawing diagrams, it is about defining facts and getting the domain experts to agree or disagree.

    So I can't see the value of letting "someone who does not understand what ORM symbols mean" loose with a "cool" tablet that allows them to draw ORM diagrams. It is easier to just enter the facts in text form (at least for me).
    The "natural flow of analysis" is from examples to facts and then to a compact diagram.
    Bear in mind that the ORM2 diagrams are just a compact way of representing facts and their interrelationships.

    There are a number of people who are trying to use "structured English" to capture facts. But for me, such an approach just gets in the way of creating a "good" object-role model, because you have to validate your facts at some stage, so why not do it right at the start?

    So if your suggested "solution" does not meet either the needs of an ORM analyst or the needs of a non-orm domain expert, then who might use it? And why? What is the problem that you see that your proposal solves? 

    Of course, whilst my opinion is based on almost 20 years of using ORM, it may also be true that I have just formed thinking habits that are hard to see past. So keep the ideas coming.

    Ken 

  • Sun, Oct 24 2010 4:24 In reply to

    • JParrish
    • Top 25 Contributor
      Male
    • Joined on Fri, May 30 2008
    • Florida, USA
    • Posts 25

    Re: Natural human input for model development?

     Ken, that view makes total sense, and it is certainly the kind of input I was wondering about. I will say that I, too, don't currently create ORM models visually, partly because it does make the most sense to use fact entry as a way of expressing the concerns of the domain. I also feel that the CSDP is a good process, and I try to follow it.

     There is, though, a group of people who respond better to visual representations of data, and ORM has certainly taken this into consideration, since there is a visual notation. I didn't mean to convey that a domain expert would be utilizing such a tool to draw parts of the domain. We serve as the interpreter for the domain expert, and in most cases verbalization and fact checking are the ideal means by which you interact and develop the model with the domain expert. But I do work visually, and if it were afforded to me, I would use something like what I proposed, so long as it worked well.

  • Thu, Oct 28 2010 12:01 In reply to

    • Tyler Young
    • Top 10 Contributor
      Male
    • Joined on Thu, Aug 27 2009
    • South Jordan, Utah, USA
    • Posts 49

    Re: Natural human input for model development?

    When we were first developing NORMA at Neumont, tablet PCs were just starting to come about. A drawing surface was definitely on the "wouldn't that be awesome?" list, but it was pretty low on the priority list. I tend to model subsets of the domain in sequence rather than going through the entire CSDP once for the whole schema. While I've started using the Fact Editor more and more for data entry, I still think a gesture-based UI for modeling would be very compelling.

    Just while I've been writing this, I've been picturing a surface where you put a finger on an empty spot of the diagram to begin a new fact type, then use another finger to drag object types into new roles. It's not even necessary to do shape recognition, really... just have various modes where your selection and drag operations do different things depending on the context. The tool could create the shapes for you without going to all the work of actually drawing a circle or rectangle with your finger.
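
    Just to think in code for a moment: below is a minimal JavaScript sketch of that mode idea, where the same gesture is dispatched differently depending on what it starts on, so the tool creates the shapes itself and no recognition is needed. The diagram API here (shapeAt, beginFactType, connectRole) is hypothetical, not NORMA's:

    ```javascript
    // Minimal sketch (hypothetical API): dispatch the same touch/drag
    // gestures differently depending on what the gesture starts on, so the
    // tool creates the shapes itself and no ink recognition is required.
    function handleGesture(diagram, gesture) {
      const shape = diagram.shapeAt(gesture.start);   // hit-test the surface
      if (!shape) {
        // Touching an empty spot begins a new fact type.
        diagram.beginFactType(gesture.start);
      } else if (shape.kind === "objectType" && gesture.type === "drag") {
        // Dragging an object type drops it into a role of a fact type.
        const target = diagram.shapeAt(gesture.end);
        if (target && target.kind === "role") {
          diagram.connectRole(shape, target);
        }
      }
    }
    ```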

  • Thu, Oct 28 2010 13:51 In reply to

    • Tyler Young
    • Top 10 Contributor
      Male
    • Joined on Thu, Aug 27 2009
    • South Jordan, Utah, USA
    • Posts 49

    Re: Natural human input for model development?

    Ken,

    You're absolutely right that modeling specialists are going to use the most efficient methods possible to create their schema. The diagram really is a side-effect of the underlying logic, and outside of our "club" the pictures do nothing to help people understand the domain.

    That being said...

    I see great value in candy-coated "toys" that let people experiment with ORM in a game-like fashion. It wouldn't solve the problem of "how do I create a model more efficiently", but it would do great things for the "how do we get more people to use ORM" situation. This is not without its danger, of course... in the programming world, PHP developers tend to be looked down on because they're generally less-educated hackers. It's not to say PHP as a language can't be done well; it's that since PHP is so accessible, it attracts everyone who is kept out by the higher entry barriers of other platforms.

    While we don't want ORM to be the PHP of the modeling world, I would argue that we're in danger (at least in my limited social sphere) of being such a small esoteric minority that we border on irrelevance. Sometimes when I'm evangelizing conceptual modeling, the hacker crowd makes me feel a bit like Darth Vader being chewed out by the Admiral at the beginning of Star Wars: "Don't try to frighten us with your sorcerous ways, Lord Vader. Your sad devotion to that ancient religion has not helped you conjure up the stolen data tapes, or given you clairvoyance enough to find the rebels' hidden fortress..."

    Mind you, Vader went on to prove his point.

    Still, the very nature of ORM is to make predicate logic easier to understand and to help us better see how aspects of the domain interrelate. In my mind, our focus on tools for productivity ignores the very large task of helping bring conceptual modeling into the mainstream. If the barrier to entry is high, we get a small number of highly educated adherents. We have a great community of experienced professionals who do brilliant work. But for all that brilliance, there's a finite amount of time in the day to build tools and solve the world's problems. We simply need more people!

    Along these lines, I've had an idea for a game that teaches basic ORM symbols by using colored shapes as object instances and letting the player go out and gather the shapes as they drop from the sky. The more facts they can populate, the higher their score. Violate a constraint, and the game flashes which constraint was violated before penalizing you. In design discussions, I've had some very skeptical students ("ORM could NEVER be fun!!") begging to know when I'd have a prototype for them to play. Even my little sister-in-law wanted to play just based on the idea, and she's as illogical a creature as ever there was.

    The game itself isn't modeling. It's just a simple logic game that happens to have ORM symbols. As the game expands and grows, it could eventually show people at the end that they've created and populated a database. It's my subversive attempt at getting people interested in computer science who would otherwise be turned off at the "geek" label. If it happens to bring a little more rationality into the non-CS world, it'd be a happy side effect.

  • Thu, Oct 28 2010 16:01 In reply to

    • Ken Evans
    • Top 10 Contributor
      Male
    • Joined on Sun, Nov 18 2007
    • Stickford, UK
    • Posts 805

    Re: Natural human input for model development?

    Hi Tyler,

    Well, I'm glad that you agree with me on "the most efficient way to develop models".

    I agree with you that we need more ORM-enabled people. However, my inclination is towards doing something that gives the "personal reward" that comes from the satisfaction of being able to use first-order predicate logic to solve a problem.

    Whilst I like your idea of a game, I feel that we need to avoid the trap of getting people's attention only to lose it when they become bored with the game and turn to something else for amusement.

    Ken

  • Thu, Oct 28 2010 18:15 In reply to

    Re: Natural human input for model development?

    Tyler Young:
    I've been picturing a surface where you put a finger on an empty spot of the diagram to begin a new fact type
    Ummm. That's what APRIMO is (becoming). Almost exactly that. Want to help?
  • Fri, Oct 29 2010 0:45 In reply to

    • JParrish
    • Top 25 Contributor
      Male
    • Joined on Fri, May 30 2008
    • Florida, USA
    • Posts 25

    Re: Natural human input for model development?

    Tyler Young:

    When we were first developing NORMA at Neumont, tablet PCs were just starting to come about. A drawing surface was definitely on the "wouldn't that be awesome?" list, but it was pretty low on the priorities. I tend to model subsets of the domain in sequence rather than going through the entire CSDP once for the whole schema. While I've started using the Fact Editor more and more for data entry, I still think a gesture-based UI for modeling would be very compelling.

    Tyler, glad to hear that the basic concept of tactile input was at least considered. It is a fairly compelling thought, but I know of no tools, besides perhaps basic diagramming in Visio, that really handle it well (and Visio does handle that incredibly well).

    Tyler Young:

    Just while I've been writing this, I've been picturing a surface where you put a finger on an empty spot of the diagram to begin a new fact type, then use another finger to drag object types into new roles. It's not even necessary to do shape recognition, really... just have various modes where your selection and drag operations do different things depending on the context. The tool could create the shapes for you without going to all the work of actually drawing a circle or rectangle with your finger.

    Your last thought seems more in line with a concept I can best describe as rapid visual ORM modeling, or perhaps "detached" modeling, in the sense that you are throwing provisional shapes onto a surface and intend to return later to refine the model. One reason I think a gesture-based interface is not as powerful as something that can interpret complex geometry has to do with switching between input devices. The idea with tactile input is to create seamless interaction with the user (by way of touch)... and in my mind you must at least label the shapes that are being thrown onto a model surface. So, without forcing the user to return to the keyboard, you need something like "ink" recognition that is exceedingly good at recognizing human scribble, so that you can do some basic text entry. With that, you inherit the ability to recognize more advanced and ORM-specific geometry.
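
    To illustrate what "validating the geometry" could mean, here is a rough sketch (plain JavaScript, not the Ink API): a closed stroke can be classified as an entity-type ellipse or a role box by how much of its bounding box it fills, since an ellipse covers about pi/4 (roughly 78.5%) of its box while a rectangle covers nearly all of it. The thresholds are guesses:

    ```javascript
    // Hypothetical heuristic: classify a closed pen stroke as an ORM
    // entity-type ellipse or a role box by comparing the area it encloses
    // to the area of its bounding box.
    function classifyStroke(points) {            // points: [{x, y}, ...]
      const xs = points.map(p => p.x);
      const ys = points.map(p => p.y);
      const boxArea = (Math.max(...xs) - Math.min(...xs)) *
                      (Math.max(...ys) - Math.min(...ys));

      // Shoelace formula for the area enclosed by the (closed) stroke.
      let area = 0;
      for (let i = 0; i < points.length; i++) {
        const a = points[i];
        const b = points[(i + 1) % points.length];
        area += a.x * b.y - b.x * a.y;
      }
      area = Math.abs(area) / 2;

      const fill = area / boxArea;               // ellipse ~0.785, box ~1.0
      if (fill > 0.9) return "role box";
      if (fill > 0.6) return "entity type";
      return "unknown";
    }
    ```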

    Otherwise, for the purpose of just doing this rapid visual ORM modeling, keyboard accelerators along with the mouse can be used to really make something like NORMA fly for an adept user. To take an example from the ORM2 glossary: B is a subtype of A (primary supertype) and C (secondary supertype). This could easily be accelerated with selection of B, holding [ctrl + shift + "S"] ---> mouse click A ---> mouse click C ---> release.
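
    A minimal JavaScript sketch of that chord, assuming the surface provides selection, shapeAt and createSubtypeLinks (all hypothetical names):

    ```javascript
    // Hypothetical sketch of the chord: hold Ctrl+Shift+S, click the
    // supertypes in order (first click = primary), release S to commit.
    let chord = null;

    document.addEventListener("keydown", e => {
      if (e.ctrlKey && e.shiftKey && e.key.toLowerCase() === "s" && !chord) {
        chord = { subtype: selection.current, supertypes: [] };
      }
    });

    document.addEventListener("click", e => {
      if (!chord) return;
      const shape = shapeAt(e.clientX, e.clientY);   // hit-test the click
      if (shape) chord.supertypes.push(shape);
    });

    document.addEventListener("keyup", e => {
      if (chord && e.key.toLowerCase() === "s") {
        // First clicked shape becomes the primary supertype, rest secondary.
        createSubtypeLinks(chord.subtype, chord.supertypes);
        chord = null;
      }
    });
    ```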

  • Fri, Oct 29 2010 1:09 In reply to

    • Tyler Young
    • Top 10 Contributor
      Male
    • Joined on Thu, Aug 27 2009
    • South Jordan, Utah, USA
    • Posts 49

    Re: Natural human input for model development?

    Clifford, I'd love to spin some cycles on it. Alas, my personal development time is booked solid for the next few months. Still, there are nights when I want a new distraction. What system requirements are there for development? I've been shying away from ActiveFacts because of an irrational fear of learning Ruby-- would that prevent me from contributing to APRIMO? What's the client-side framework? I'm most comfortable with jQuery, but I find the JavaScript libraries to be comparatively easy to pick up.
  • Fri, Oct 29 2010 1:51 In reply to

    Re: Natural human input for model development?

    Tyler Young:
    an irrational fear of learning Ruby
    Yes, Ruby is required for the server-side. I think you'd find it easy to learn, and doing so would probably make you a better programmer. It did for me anyhow - and restored some of the joy I'd lost. But the next tasks available on the server-side are fairly intense in their own right, without considering the language issue.
    Tyler Young:
    What's the client-side framework?

    jQuery and Raphael, and soon I'll employ jQTouch or build a gesturing library. APRIMO doesn't work on the iPad at present because it relies on hover, and also fakes up a text widget (which doesn't invoke the iPad keyboard). I also use the shift key (to change a move-drag into a connect-drag), but that can be replaced by a touch-pad in the bottom corner for touch devices.

    I don't plan to put modeling logic into the client. Instead, each action will be asynchronously sent (with client-side queued AJAX calls) as JSON to the server, where the undo/redo history will be resolved, merged and distributed between possibly more than one browser (using long-polling). This fully-asynchronous messaging framework is next on the agenda for me, and could be the subject of a separate development effort. Browsers will often only make two independent connections to a website, so a single long-polling connection must handle receiving asynchronous notifications, while a second is available to send data (or else the long-poll can be aborted and the transmission done on a fresh connection). The key thing is to avoid loss of message packets, by numbering them in each direction so that a request/response may contain multiple catch-up messages. Does that sound like an interesting JavaScript challenge to you?
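
    To make the packet-numbering idea concrete, a bare-bones sketch (using today's fetch API for brevity; the /actions and /events endpoints are hypothetical). One connection long-polls for numbered messages while the other drains an outbox of numbered actions:

    ```javascript
    // Sketch of the numbered-message scheme. Every message carries a
    // sequence number in its direction, so gaps are detectable and a single
    // response can deliver several catch-up messages.
    const outbox = [];
    let sendSeq = 0;   // last sequence number assigned to an outgoing action
    let recvSeq = 0;   // last sequence number applied from the server

    function applyMessage(message) {
      // Merge the confirmed change into the local model / undo history here.
    }

    function sendAction(action) {
      outbox.push({ seq: ++sendSeq, action });
      flush();
    }

    async function flush() {                       // second connection: send
      if (outbox.length === 0) return;
      const batch = outbox.splice(0);              // drain the queue
      await fetch("/actions", {                    // (retries omitted here)
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ ack: recvSeq, messages: batch }),
      });
    }

    async function longPoll() {                    // first connection: receive
      for (;;) {
        const res = await fetch(`/events?since=${recvSeq}`); // parks until data
        const { messages } = await res.json();     // may hold many catch-ups
        for (const m of messages) {
          if (m.seq === recvSeq + 1) {             // apply strictly in order
            recvSeq = m.seq;
            applyMessage(m);
          }
        }
      }
    }

    longPoll();
    ```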

    I have a choice of server technologies, but Ruby's async_sinatra is one I've played with. The server needs to be Ruby-based to get to my other APIs (otherwise I'd probably use node.js), and Sinatra is much simpler than the full Rails stack. In either case I plan to store the models in a PGSQL database using DataMapper (my own persistence framework isn't ready), and there's some additional modeling needed for that beyond my current metamodel. I can't do long-polling on my current hosting, so that has to change too. The APRIMO server will be publicly accessible online (with free subscriptions for code contributors), but the code is copyrighted, available only as an enterprise server product.

  • Fri, Oct 29 2010 1:57 In reply to

    • Tyler Young
    • Top 10 Contributor
      Male
    • Joined on Thu, Aug 27 2009
    • South Jordan, Utah, USA
    • Posts 49

    Re: Natural human input for model development?

    Have you ever used Photoshop or Illustrator? The Adobe products tend to have very refined user interfaces that allow you to do any action through menus, and any common action through a combination of keyboard shortcuts and input gestures. The gesture for creating multi-column role sequences was one of the more interesting discussions we had in the early days of NORMA.

    Having "modes" for performing different tasks can improve the workflow, but it's tricky to balance productivity for a specific task (rearranging the layout of the diagram, let's say) and getting the user lost in a labyrinth of contexts where the same gesture means different things depending on the context. Adobe does a reasonable job of this through changing cursor icons, visual cues on the drawing surface itself, and letting basic users do things through menus while giving advanced users keyboard shortcuts to dance between activities.

    I still like the idea of ink, but looking around at the touch-screen devices I see people carrying, ink just isn't being used for textual input. The trend seems to be toward on-screen virtual keyboards or small physical keyboards attached to the device. The stylus as an input device seems to be phasing out, and people don't write with their big fat index fingers when they want to be productive.

    Mind you, my opinion is based only on the prevalence of mobile devices with multi-touch screens that I see around me-- the only devices I own are my work-issued laptop and a phone whose sole feature is actually placing phone calls. What's your experience with tablets and such?

  • Fri, Oct 29 2010 2:17 In reply to

    • Tyler Young
    • Top 10 Contributor
      Male
    • Joined on Thu, Aug 27 2009
    • South Jordan, Utah, USA
    • Posts 49

    Re: Natural human input for model development?

    Clifford,

    I've been curious about using Raphael as the platform for my game idea. Sounds like when I get a little more experience with that, I could have some useful skills to contribute to the APRIMO UI.

    I've seen code for handling long-polling, and it's usually hackish and potentially dangerous. Rather than long-polling, have you looked into HTML 5 web sockets? They're not supported by all clients yet, but they're very compelling. Some of my students did a small project using web sockets, and the responsiveness was impressive. There's still some error handling to do, but you could serialize commands to a browser database (another HTML 5 feature that's being discussed) and sync with the server when the connection comes back.
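
    For example, a bare sketch of that offline-capable command channel, using localStorage as the browser-side store (the endpoint is hypothetical):

    ```javascript
    // Sketch: push model-editing commands over a WebSocket when connected,
    // and queue them in browser storage while offline.
    const pending = JSON.parse(localStorage.getItem("pending") || "[]");
    let ws;

    function connect() {
      ws = new WebSocket("wss://example.org/model");   // hypothetical server
      ws.onopen = () => {
        // Back online: replay any commands queued while disconnected.
        pending.splice(0).forEach(cmd => ws.send(JSON.stringify(cmd)));
        localStorage.setItem("pending", "[]");
      };
      ws.onclose = () => setTimeout(connect, 1000);    // simple retry
    }

    function sendCommand(cmd) {
      if (ws && ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify(cmd));
      } else {
        pending.push(cmd);                             // persist for later sync
        localStorage.setItem("pending", JSON.stringify(pending));
      }
    }

    connect();
    ```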

    Something like that will obviously never be as responsive as having client-side modeling. But wow, it's still a powerful idea! The potential for remote collaborative modeling is really interesting to me.

  • Fri, Oct 29 2010 2:36 In reply to

    • JParrish
    • Top 25 Contributor
      Male
    • Joined on Fri, May 30 2008
    • Florida, USA
    • Posts 25

    Re: Natural human input for model development?

    I started in graphic design work, a difference I suspect I have with many in the ORM community, so I certainly understand your parallels to the Adobe products. I would reckon that my usage of accelerators + gestures comes from my experience with products like Maya 3D... talk about complex models (inside joke?)

    To be honest, I just now tried a true hybrid approach of using the Fact Editor along with the visual surface and tools, in an attempt at speed, and it was a fairly good experience. I threw three entity types on, along with a ternary fact type, connected the three entity types to the predicate holes, and then, using the Fact Editor with the Entity1, Entity2, and Entity3 reading already in place, I typed a quick reading... and that worked pretty well.

    Simple things, like the role connector tool being "de-focused" after each connection (having to re-select the tool for every connection), lead to a frustrating visual experience, but I say that with absolute respect for the work that has been done.

  • Fri, Oct 29 2010 2:58 In reply to

    • Tyler Young
    • Top 10 Contributor
      Male
    • Joined on Thu, Aug 27 2009
    • South Jordan, Utah, USA
    • Posts 49

    Re: Natural human input for model development?

    Aaah, the graphics background makes total sense. I've never worked with Maya, but I think we're on the same page.

    Glad to hear the drawing experience was good for you. The visual diagramming tools were the first ones developed, since the Fact Editor was a fairly complex piece of work that was going to take a while to get done. Once the Fact Editor was in place, the mouse gestures kind of took a back seat. Frankly, as an "expert user" of NORMA, I rarely use the mouse-driven modeling tools to create my fact types.

    For the role connectors, have you tried just dragging from the center of a role to the Object Type? You don't actually have to have the role connector tool enabled to make that gesture work.

  • Fri, Oct 29 2010 3:01 In reply to

    Re: Natural human input for model development?

    Tyler Young:
    Have you ever used Photoshop or Illustrator

    No, but I co-founded a successful startup building a user interface management system, which absorbed me for a decade, so I'm pretty familiar with the options.

    APRIMO avoids menus almost entirely, using click, shift-click, double-click, drag and *shift-drag* for almost all functions. And the keyboard, of course. It's much quicker to sketch with than NORMA - three to five times faster in my experience. Some minor changes will be needed to support multi-touch (tap to select before you can drag, and currently some keystrokes are directed using hover, for example), but I don't anticipate any difficulty in supporting both those and mouse-driven devices equally well. Currently it's lacking in the ORM semantics that will come from the server-side, but you can still produce almost any diagram that NORMA can.

    Re long-polling, that's a fallback - obviously I'd prefer to use websockets on the few browsers that implement them. socket.io supports all modes with fallback, and a friend has a prototype Ruby backend for it. The other point is that the graphic changes are made in real time on the client (so it feels fast), and get confirmed, modified or rejected by the server afterwards. I think it'll feel pretty much like a client-side app.
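
    That confirm-or-reject flow might look something like this sketch (all names hypothetical): apply each change locally right away, keep an inverse operation, and undo it if the server rejects it.

    ```javascript
    // Sketch of optimistic updates: each edit is applied to the diagram
    // immediately, and an inverse operation is kept so a later server
    // rejection can roll it back.
    let nextSeq = 0;
    const inFlight = new Map();              // seq -> undo function

    function performLocally(change) {
      const undo = applyToDiagram(change);   // hypothetical: returns inverse
      const seq = ++nextSeq;
      inFlight.set(seq, undo);
      send({ seq, change });                 // queued/async send, as above
    }

    function onServerVerdict({ seq, status, correction }) {
      const undo = inFlight.get(seq);
      inFlight.delete(seq);
      if (status === "rejected") {
        undo();                              // roll the local change back
      } else if (correction) {
        applyToDiagram(correction);          // server modified the change
      }
    }
    ```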
