Hello Alessandro,
I think I see where you're going. The FBM exchange Ken is talking about exchanges models (meta-information), but it is not really designed to exchange data for a specific model. The idea of exchanging data as fact assertions and retractions is very compelling to me as well, because it removes assumptions about the implementations (i.e., the physical mappings of a conceptual model) at the two endpoints. As long as both parties recognize the same fact types and object types, the data is easily exchanged.
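To make that concrete, here is a minimal sketch of what such an exchange could look like. The fact type names, message shape, and the `applyExchange` helper are purely illustrative assumptions on my part, not an existing format:

```javascript
// Hypothetical exchange payload: assertions and retractions over shared
// fact types. Neither endpoint needs to know the other's physical schema.
const exchange = [
  { op: "assert",  factType: "PersonHasName", roles: { Person: "P1", Name: "Alice" } },
  { op: "retract", factType: "PersonHasName", roles: { Person: "P1", Name: "Alicia" } }
];

// A minimal receiver: apply each item to a local fact store keyed by
// fact type plus role values.
function applyExchange(store, items) {
  for (const { op, factType, roles } of items) {
    const key = factType + ":" + JSON.stringify(roles);
    if (op === "assert") store.add(key);
    else store.delete(key);
  }
  return store;
}
```

The point is that the payload speaks only in conceptual terms (fact types, object types, role values), so each endpoint is free to map it onto whatever physical storage it likes.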
I don't have anything ready for public consumption at this point, but I'm working on an implementation stack for an ongoing consulting project that is based on physical mappings of the conceptual model on both the client and the server. The architecture behind the web-based ORM viewers (follow the links from http://ormsolutions.com) uses a declarative client-side (read: JavaScript) definition of the ORM, diagram, and ORM-diagram metamodels, which are read by a fact-driven framework. The framework makes reading easy (fully navigable models that can be traversed with normal JavaScript code) and writing slightly harder (all changes are asserted as facts, with the framework controlling the readable storage). While the public viewers are read-only for now, editing capabilities are a fundamental part of the framework. Edits are tracked as state changes (before/after states expressed as facts for modifications) that can be played forward or backward for an undo stack, and these state changes can be serialized by the framework into a JSON representation. The server then reads the JSON and translates the fact assertions into database commands.
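As an illustration of the state-change idea (a hypothetical sketch, not the framework's actual API), each edit records its before/after fact states, so the change log can be replayed backward for undo and serialized to JSON for the server:

```javascript
// Hypothetical change tracker: each modification retracts its 'before'
// facts and asserts its 'after' facts. The recorded changes double as
// an undo stack and as the JSON payload sent to the server.
class ChangeTracker {
  constructor() { this.changes = []; this.facts = new Set(); }
  modify(before, after) {
    this.changes.push({ before, after });
    before.forEach(f => this.facts.delete(f));
    after.forEach(f => this.facts.add(f));
  }
  undo() {
    const c = this.changes.pop();
    if (!c) return;
    // Play the change backward: re-assert 'before', retract 'after'.
    c.after.forEach(f => this.facts.delete(f));
    c.before.forEach(f => this.facts.add(f));
  }
  serialize() { return JSON.stringify(this.changes); }
}
```

Because every change carries both states, the same record serves redo (play forward), undo (play backward), and the serialized request to the server.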
The eventual goal here is to enable a rich client to perform extended client-side manipulations of an ORM-generated client model, then send an arbitrarily large change request to the server. Each app has one web-service method for writing model changes, with a generated data-access layer to get the data into the database. I'm actively working on a client generator, which will be followed by a mapping from the generated client model to the generated database model. The final (per technology stack) generator will translate the incoming JSON into the database. Obviously, this is a non-trivial amount of work, but I think the benefits are huge once you get there. Basically, you get a client-side representation of your data, a service on the client to track changes and produce internally consistent JSON, a web service to receive the changes to the client data, and a generated data-access layer to write the changes to a generated database. All you need to do this is an .orm file.
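The server side of that single write method could be sketched as follows. The `translateChangeSet` function, the mapping object, and the command strings are hypothetical names for illustration, since the generated data-access layer described above doesn't exist publicly yet:

```javascript
// Hypothetical server-side translation: one JSON change set arrives per
// request; a (generated) mapping turns each fact change into a database
// command. The commands would be executed in a single transaction.
function translateChangeSet(changeSet, mapping) {
  const commands = [];
  for (const { before, after } of changeSet) {
    before.forEach(f => commands.push(mapping.deleteCommand(f)));
    after.forEach(f => commands.push(mapping.insertCommand(f)));
  }
  return commands;
}
```

The mapping object is the piece the per-stack generator would produce from the .orm file, so the endpoint itself never changes per application.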
While the client-data-to-server scenario can be handled generically, getting sufficient data (and no more) out of the database to show an individual page needs to be handled on a per-page basis. What I envision is a declarative hierarchical query structure (multiple result sets, each building on data from the previous ones) that pulls just enough data to populate the client model and render a given page. The difference between a derivation rule, a subquery, a top-level (parameterized) query, and a hierarchical query is fairly minimal. I've done a lot of work on derivation rules and subqueries (see https://www.ormfoundation.org/forums/p/1054/3321.aspx#3321 for another discussion of work in progress). Combining individual queries into hierarchical queries is a logical next step.
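A hierarchical query declaration might look something like the sketch below, where each step's parameters bind either to page input or to a column of an earlier result set. The syntax, step names, and the tiny resolver are my assumptions for illustration, not a planned format:

```javascript
// Hypothetical hierarchical query: later steps bind to earlier results.
const pageQuery = {
  steps: [
    { name: "orders",   query: "OrdersForCustomer", params: { customer:   "@input.customerId" } },
    { name: "lines",    query: "LinesForOrders",    params: { orderIds:   "@orders.orderId" } },
    { name: "products", query: "ProductsForLines",  params: { productIds: "@lines.productId" } }
  ]
};

// Minimal resolver: run each step, feeding bound parameters from the
// input object or from a column of a previously produced result set.
function runHierarchy(decl, input, runQuery) {
  const results = {};
  for (const step of decl.steps) {
    const params = {};
    for (const [k, ref] of Object.entries(step.params)) {
      const [, set, col] = ref.match(/^@(\w+)\.(\w+)$/);
      params[k] = set === "input" ? input[col] : results[set].map(r => r[col]);
    }
    results[step.name] = runQuery(step.query, params);
  }
  return results;
}
```

The appeal of keeping this declarative is that the same structure can be analyzed to verify that the combined result sets are sufficient to populate the client model for the page.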
So, while the server-side parts of this are vaporware at this point, the client framework is relatively mature (except for the generator that produces the declarations from an .orm file), and I think we're thinking along the same lines for communicating data in terms of elementary facts. I had this type of stack working for a previous employer (the same client stack translates changes to JSON, a declaratively generated middle layer maps this to a database, and hierarchical queries can be specified on the server to retrieve the initial data for a page). ORM Solutions maintained ownership of all of the client work (hence the intact ORM viewers built on the framework), but I don't have any of the server-side implementation, as it targeted a proprietary deductive database backend instead of a Microsoft or LAMP stack on a relational database. So, I'm rebuilding something that I've successfully done in the recent past against a (radically) different backend platform; I've seen it working before and am highly motivated to build it up again.
This doesn't give you a current solution directly in NORMA, but it is definitely part of the picture. If you want more details for a specific project, we'll probably need to take it offline at this point.
-Matt