[TransWarp] Tips on using model, storage, config, and naming (was Re: First attempt to use the storage package in PEAK)

Phillip J. Eby pje at telecommunity.com
Sat Dec 28 16:10:02 EST 2002


At 04:46 AM 12/25/02 +0200, Roch'e Compaan wrote:

>I suspect that I will want what 'peak.model' provides in the long run
>although I haven't spent enough time with it to appreciate what it can
>do for me now. One requirement I have for my domain classes is that
>properties on them are easily discoverable which should help a lot when
>writing validators to validate user input etc. It seems that subclassing
>attributes from model.Field can help with this.

Yes.  At some point, model.Classifier and its subclasses will have a class 
attribute, _mdl_Features (or something like that) that lists the feature 
objects.  There will be a variety of such class attributes that list what 
instance features exist, within different categories of features.
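
To give a purely hypothetical sketch of how that discoverability might be 
used (both the attribute name and the feature objects' interface are still 
subject to change), a validator or form generator could do something like:

     # Hypothetical sketch only; assumes a '_mdl_Features' class attribute
     # whose items carry a 'name' attribute.  Neither is final yet.
     def discoverFeatureNames(klass):
         return [f.name for f in klass._mdl_Features]

     # A user-input validator could iterate over
     # discoverFeatureNames(Contact) instead of hard-coding field names.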

I'm actually working on the validation framework for 'peak.model' right 
now; the main issues I'm running into involve multi-object constraints and 
combinatorial validation: for example, a rule that every invoice line item 
for a product must meet such-and-such criteria.  It seems necessary to 
support incremental validation in such circumstances, but in order to do 
that, you have to know what the increments are.
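
To make that concrete, here's a plain-Python illustration of such a 
constraint (not peak.model API, and all the attribute names are invented), 
and of why knowing the "increment" matters:

     # Plain-Python sketch, not peak.model API; attribute names are made up.
     def lineItemOK(invoice, item):
         # the multi-object part: the rule spans invoice, item, and product
         return invoice.customer.isLicensed or not item.product.requiresLicense

     def validateInvoice(invoice):
         # brute force: re-check every line item whenever anything changes
         return all(lineItemOK(invoice, item) for item in invoice.lineItems)

     def validateAddedLineItem(invoice, newItem):
         # incremental: only the new item needs checking, but that's only
         # safe if we know "a line item was added" is the increment
         return lineItemOK(invoice, newItem)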

Ideally, being able to specify constraints in a "relatively declarative" 
form would make validating them that much easier.  Actually, I wonder 
whether I should be looking into OCL (the Object Constraint Language), 
since then one could put the constraints into the UML model for an 
application from the start.


> > Also, 'peak.model' has a significant refactoring pending, and it's not 
> well
> > documented.  For the time being, you may not want to bother with it.  But
> > if you want examples of its use, look at:
> >
> > peak.metamodels.MOF131,
> > peak.metamodels.uml.MetaModel, and
> > peak.metamodels.uml.Model
> >
> > More or less in that order, with the caveat that MOF131 is untested and
> > might contain errors, or that it might have code that depends on
> > refactorings in 'peak.model' that haven't actually been done yet.  :)
>
>For now I am happy with what peak.storage and peak.naming has given me
>but I am very curious to see how PEAK can convert a UML model to code and
>vice versa.

It will be by translating from the UML model to a MOF model, and then 
generating code for the MOF model.  MOF is a much simpler modelling 
language than UML, one that allows for more direct translation to 
implementation code.  UML itself is specified in terms of the MOF, and so 
are other modelling languages such as the CWM (Common Warehouse 
Metamodel), so this allows us to "close the loop", so to speak.  New 
versions of UML and CWM are published in XMI form, based on the MOF 
metamodel.  So, as soon as PEAK supports the MOF metamodel, and can 
generate Python from a MOF model, we can generate code like what's in 
'peak.metamodels.uml.MetaModel' directly from the OMG specifications for UML.

That's the theory, anyway.  In practice, they keep changing MOF about as 
often as they change UML, so it's not all as automatable as one might like.


> > >class ContactDM(storage.EntityDM):
> > >
> > >    defaultClass = Contact
> > >
> > >    attrs = ['Name', 'Surname', 'HomePhone', 'WorkPhone', 'Email']
> > >
> > >    DBConn = binding.bindTo(storage.ISQLConnection)
> >
> > Replace the line above with:
> >
> >      DBConn = binding.bindTo("mysql://roche:mypasswd@localhost/Contacts")
> >
> > This suffices to instantiate a MySQLConnection.  You also don't need to
> > import MySQLConnection.  This is one of the things that the naming system
> > is for.  :)
>
>Now I can appreciate what peak.naming is for. This is great!

That's only the beginning; ordinarily you wouldn't even use a hardcoded 
address like that, but instead a name like:

     DBConn = binding.bindTo("MyContactsDatabase")

This would be looked up first in the component's parents, and if not found, 
would then be looked up in the default "initial naming context" for the 
component (which is set by the 'peak.naming.initialContextFactory' 
property).  The idea here is that you'll configure the initial naming 
context to be based on some kind of configuration file or naming service, 
which will then resolve "MyContactsDatabase" to its actual address.  Here's 
another way to do it, without creating a special naming provider, but just 
using the configuration properties system from peak.config:

     DBConn = binding.bindTo("config:MyContacts.Database/")

(Note that the trailing '/' is important for "config:" URLs; it tells the 
"config:" context that you want to get the *value* of the property, rather 
than a context object for the property namespace rooted at that property 
name.  This is actually a simple example of a "composite name"; if the 
object found at 'config:MyContacts.Database' is a naming context, then 
anything after the '/' would be looked up in *that* context.  This allows 
you to bridge across multiple naming services, such that you could have a 
configuration property that specified another naming service, in which you 
looked up another naming service, and so on...)
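
Plugged into the data manager from your original example, that looks 
something like this (a sketch; only the 'DBConn' line really changes):

     class ContactDM(storage.EntityDM):

         defaultClass = Contact

         attrs = ['Name', 'Surname', 'HomePhone', 'WorkPhone', 'Email']

         # resolved via the configuration property, not a hardcoded address
         DBConn = binding.bindTo("config:MyContacts.Database/")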

Next, add this to a file called 'myApp.ini':

[MyContacts]
Database = naming.LinkRef("pgsql://roche:mypasswd@localhost/Contacts")

And then either set the PEAK_CONFIG environment variable to "myApp.ini", 
*or* add this near the top of your script, before you make use of any PEAK 
API calls:

PEAK_CONFIG = "myApp.ini"

This will cause PEAK to read "myApp.ini" after "peak.ini", to set up or 
change configuration properties.  The example above will create a property 
called 'MyContacts.Database' upon demand.  The value of that property will 
be a symbolic link to the database address.  See:

 >>> from peak.api import *
 >>> PEAK_CONFIG='myApp.ini'
 >>> naming.lookup('config:MyContacts.Database/')
<peak.storage.SQL.PGSQLConnection object at 0x013CBB50>
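
The same thing in a standalone script (just a sketch, assuming the 
'myApp.ini' above is in the current directory) would look like:

     # Set PEAK_CONFIG before making any PEAK API calls.
     PEAK_CONFIG = "myApp.ini"

     from peak.api import *

     db = naming.lookup('config:MyContacts.Database/')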

LinkRef objects are like symbolic links to another name or address.  They 
are treated specially when retrieved from a naming context, assuming they 
are processed by the default object factories.  Naming contexts also 
support a 'lookupLink()' operation that retrieves the LinkRef itself, 
instead of following it, if one is bound to the corresponding name.  See:

 >>> naming.InitialContext().lookupLink('config:MyContacts.Database/')
LinkRef('pgsql://roche:mypasswd@localhost/Contacts')

Also note that LinkRef handling is a function of the naming system; if you 
use the config system directly to look up the property, you'll just get the 
LinkRef:

 >>> config.getProperty('MyContacts.Database')
LinkRef('pgsql://roche:mypasswd@localhost/Contacts')

And the same would happen if you used 
"binding.bindToProperty('MyContacts.Database')" in your class.  If you 
needed to use getProperty or bindToProperty instead of bindTo("config:"), 
of course, you could be a bit more explicit in your .ini file:

[MyContacts]
Database = naming.lookup(
                "pgsql://roche:mypasswd@localhost/Contacts",
                creationParent=targetObj
            )

This will look up and create a new database connection, using 'targetObj' 
(the object for which the property was requested) as its parent 
component.  In addition to being more tedious to specify, however, this 
approach will not allow the database connection to know its component name 
within the data manager, because the getProperty() API doesn't pass enough 
information through to the property provider to do it.  99% of the time 
this should be a non-issue, though.  In fact, more like 99.999% of the 
time, because it's unlikely that you'll need to use getProperty() or 
bindToProperty for something like this when bindTo("config:") is more than 
adequate, and as soon as we have a couple of simple DBM- and file-based 
naming providers, you probably won't even bother with "config:".

You'll notice, by the way, that I used 'pgsql' for the examples above 
rather than 'mysql', because I don't have your mysql driver; I assume 
you've written one of your own based on the ones in 
'peak.storage.SQL'.  Note also that if you write your own driver for 
something associated with a URL scheme, you don't have to place it inside 
PEAK.  You can put it anywhere.  Just add a line to myApp.ini, 
like so:

[peak.naming.schemes]
myscheme = "some.package.of.mine:MyContextOrAddressClass"

et voilà.  As long as PEAK loads your ".ini" file (and you can ensure that 
it does by setting the PEAK_CONFIG environment variable), then URLs 
beginning with "myscheme:" will be usable.
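
Once that's loaded, the new scheme works anywhere an address is accepted; 
for example (the address here is obviously just a placeholder):

     conn = naming.lookup("myscheme:some/address/your/class/understands")

or, in a component:

     DBConn = binding.bindTo("myscheme:some/address/your/class/understands")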

If you want to support both application-level and system-level 
configuration files, just add this to the *top* of your application's .ini 
file:

[Load Settings From]
file = config.getProperty('environ.PEAK_CONFIG', default=None)

And then set PEAK_CONFIG in your main application script to load your 
application's .ini file.  Now when your app runs, PEAK will use the value 
of __main__.PEAK_CONFIG to load the application ini file, which then 
contains the above instructions to load the ini file specified by the 
PEAK_CONFIG environment variable.  The reason you put this at the *top* of 
the file is so that the system-wide settings are loaded first, then 
overridden by settings in the application config.  Of course, you may have 
settings for which you want application defaults that the system-wide 
configuration can override; place those before the "[Load Settings From]" 
section.  (Note that you can have multiple "[Load Settings From]" sections, 
or indeed multiples of *any* kind of section; PEAK effectively doesn't 
require sections to be contiguous.)
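
Putting the pieces together, a typical application .ini might therefore be 
laid out something like this ('myApp.logLevel' is just a made-up example of 
an overridable default):

[myApp]
logLevel = "WARNING"

[Load Settings From]
file = config.getProperty('environ.PEAK_CONFIG', default=None)

[MyContacts]
Database = naming.LinkRef("pgsql://roche:mypasswd@localhost/Contacts")

Here the 'myApp.logLevel' default can be overridden by the site-wide 
config, while 'MyContacts.Database', coming after the "[Load Settings 
From]" section, overrides anything the site-wide config says about it.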

You should also note that only "Load Settings From" and "Provide Utilities" 
sections are processed at *load* time.  All other section types remain 
unexecuted until the desired property is looked up.  This means that it's 
okay to have fairly complex, elaborate, or extensive configuration data in 
your application or sitewide configs; you pay only for section and line 
parsing at load time.  The code in a setting is only eval()'d when the 
property is actually used.  Properties can also be overridden until they 
are first used, so configuration lines later in a file override earlier 
ones.  Once a setting has been used, however, any 
subsequent attempt to set it will result in an 'AlreadyRead' 
exception.  This saves you from "dueling settings" where you would swear 
something was set right because function "A" sees it with one setting, but 
function "B" is quite positive that it's set to something else!


>Yes I wanted to know what that '~' is for.  Is this some form of
>operator overloading?  Where in the source does this happen?

It's in 'peak.storage.connections'; the AbstractCursor class defines an 
__invert__ operator which is also the 'justOne()' method.  So, you can also 
say:

self.DBConn('SELECT * ...').justOne()

in order to force single-row retrieval.
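
In other words, given the DBConn binding from your example (and a made-up 
query), these two lines are equivalent:

     row = ~self.DBConn("SELECT * FROM Contacts WHERE Name='Roche'")

     row = self.DBConn("SELECT * FROM Contacts WHERE Name='Roche'").justOne()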



