The tool does verify non-zero lengths for Value data text types, doesn't it?
I don't think I've ever added any validation for this; you just get a maximum-length string. This will come with formal facet definitions. Right now, however, we essentially use two slots for all of the facet data. The display name for this data, and whether or not it is serialized, is up to the data type, but we do not do any additional checking on it.
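To make the facet idea concrete, here is a minimal sketch of what a length-facet check might look like once formal facet definitions exist. Nothing like this is in the tool today; the names (`TextFacets`, `validate`) are purely illustrative assumptions.

```python
# Hypothetical sketch of length facets on a text data type.
# A non-zero minLength facet is what would enable the empty-string check
# asked about above; today no such validation runs.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TextFacets:
    min_length: int = 0                # 0 means the empty string is allowed
    max_length: Optional[int] = None   # None means unbounded


def validate(value: str, facets: TextFacets) -> List[str]:
    """Return a list of facet violations (empty list means valid)."""
    errors = []
    if len(value) < facets.min_length:
        errors.append(f"value shorter than minLength={facets.min_length}")
    if facets.max_length is not None and len(value) > facets.max_length:
        errors.append(f"value longer than maxLength={facets.max_length}")
    return errors


# A minLength of 1 rejects the empty string but accepts normal input:
assert validate("", TextFacets(min_length=1))
assert validate("abc", TextFacets(min_length=1, max_length=10)) == []
```

The two facets here correspond roughly to the "two slots" mentioned above; a real implementation would generalize this per data type.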
UUIDs would "never" collide
Yes, this does make them special, because auto-counters collide all the time. This is important metadata: making auto-counter ids unique across the system is irksome, whereas this happens automatically for UUIDs.
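A quick illustration of the difference, assuming random (version 4) UUIDs. The scenario of two systems assigning ids independently is my own example, not something from the tool:

```python
import uuid

# uuid4() draws 122 random bits, so independently generated values are
# unique for all practical purposes, even across unrelated systems.
ids = {uuid.uuid4() for _ in range(10_000)}
assert len(ids) == 10_000  # no collisions

# Auto-counters restart per table/database, so two systems that each
# assign 1, 2, 3, ... collide immediately when their data is merged.
system_a = set(range(1, 1001))
system_b = set(range(1, 1001))
assert system_a & system_b  # every id overlaps
```

This is why the "never collides" property is worth carrying as metadata: it changes what a merge or cross-system reference has to do.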
The other issue I didn't mention with a second IDENTITY column is that SQL (at least; probably other targets as well) ignores the nullable bit, which means that an optional absorbed identifier ends up mandatory. This causes a silent validation failure if the primary table does not have its own IDENTITY column to clash with.
The idea of generation targets being able to impose limitations and additional options on the selected data types is interesting. For example, the choice of string storage (UNICODE, etc.) is not a conceptual choice, but it is a choice as you map to a DBMS. The limitation question, where a generation target imposes a data limitation, is also interesting because that limitation may need to bubble back up so that other generated artifacts keep the data consistent. I don't think this will be as common as the target-specific properties. I'd still like to keep these under the umbrella of the conceptual data type, though, instead of adding a second data type.
Wow, you didn't need to do that for me
I'm not quite that nice (check the file dates; I've had this for a while). I would like to know if it works for you, though, as I haven't had any other feedback on it.
I'd rather have a data type driven validation warning
I'm not saying we'll never incorporate data inputs into the absorption choices, just that I need more metadata than I currently have to introduce a real fix for the problem. There are actually two stages in the analysis: the first chooses possible absorption paths, and the second decides whether or not to use those choices. So, the final absorption may not be the same in all targets. For example, an automatic form layout would not move absorbable elements to a subform simply because of an auto-counter identifier. I won't necessarily hit par with VM in this area; I just don't have the manpower to throw at it. I think VM did warn, but they didn't have any way to fix the problem without a change at the conceptual level.
Thanks for your comments,