Terry's suggestion to look at what is currently checked is a good one. That list shows that we validate mandatory constraints (simple and disjunctive) and uniqueness constraints, although this is somewhat misleading because only single-column uniqueness is validated right now.
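To make the distinction concrete, here is a minimal sketch (not the tool's actual code; the `validate` function and column names are hypothetical) of what validating sample rows against simple mandatory and single-column uniqueness constraints looks like:

```python
# Illustrative sketch only -- this is NOT the tool's implementation.
# Checks sample rows against simple mandatory (required, non-empty) and
# single-column uniqueness constraints.

def validate(rows, mandatory_cols, unique_cols):
    errors = []
    seen = {col: set() for col in unique_cols}
    for i, row in enumerate(rows):
        for col in mandatory_cols:
            # Simple mandatory: the column must have a value in every row.
            if row.get(col) is None:
                errors.append(f"row {i}: mandatory column '{col}' is empty")
        for col in unique_cols:
            # Single-column uniqueness: no value may repeat in this column.
            value = row.get(col)
            if value is not None:
                if value in seen[col]:
                    errors.append(
                        f"row {i}: duplicate value {value!r} in unique column '{col}'")
                seen[col].add(value)
    return errors

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},             # violates mandatory
    {"id": 3, "email": "a@example.com"},  # violates uniqueness
]
for err in validate(rows, mandatory_cols=["email"], unique_cols=["email"]):
    print(err)
```

Multi-column (composite) uniqueness would need to track tuples of values rather than single values, which is part of why it is not covered yet.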
Obviously, we would like to do as much constraint evaluation as possible in the tool. What is less clear is how much of it we can keep reasonably performant as data sets grow. The tool is designed primarily to hold metadata about the model, not to be a repository for the data itself. It might make more sense architecturally to generate a sample repository with enforced constraints external to the model and let the repository itself do the data validation. However, a full implementation of this is unfortunately a long way off, as a number of higher-priority items come before it.
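As a rough sketch of that architecture (this is an illustration, not a committed design; the table and column names are made up), the idea is that the generated schema carries the model's constraints and the database engine does the rejecting:

```python
import sqlite3

# Hypothetical sketch: a generated sample repository whose schema carries
# the model's constraints, so the engine -- not the modeling tool --
# enforces them on inserted sample data.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE  -- mandatory + single-column uniqueness
    )
""")
conn.execute("INSERT INTO person (id, email) VALUES (1, 'a@example.com')")
try:
    # Duplicate email: the repository rejects it; the tool never has to scan.
    conn.execute("INSERT INTO person (id, email) VALUES (2, 'a@example.com')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The appeal is that validation cost then scales with the engine's indexing rather than with whatever the modeling tool can re-check in memory.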
We are always interested in how you are using our tool, so if you could share more about your scenarios--such as why you need so much sample data--that would be very useful information.