[TransWarp] Configuration stack requirements

Phillip J. Eby pje at telecommunity.com
Wed Jul 3 19:07:55 EDT 2002


At 04:30 PM 7/3/02 -0400, Phillip J. Eby wrote:
>
>In fact, the next issue up for Ty and me is finalizing the plan for the
>configuration stack, which will by default include the ability to set
>configuration items via files, the environment, and variables in __main__
>(i.e. defined in your startup script).  We just haven't finalized the
>precise precedence order and semantics as yet.

* The basic requirement is a read-only mapping interface for looking up a
string and returning a value.

* For a given "configuration", values must be consistent over time.  If you
ask the same "configuration" for a value, you should always get the same
value, no matter how the value was originally obtained by that
configuration.  

* It should be possible to layer or combine configurations to create a
precedence-ordered namespace stack.  (Presumably by addition, to produce a
new immutable configuration object; see the sketch following this list.)

* It should be possible to define a wide variety of configuration sources,
including os.environ, __main__, ConfigParser files, etc.  PEAK users should
be able to define and use their own sources, as long as they implement the
right interfaces.

* Many configuration data sources are text-only in nature.  (E.g.,
'os.environ'.)  There needs to be some sort of schema or type conversion
capability built into the configuration system, so that text can be
converted into numbers, names/URLs, import specifications or lists thereof,
etc.

* Some type conversions (such as looking up a name or processing an import)
are potentially quite expensive, so it should be possible to cache these
results as well.  The naming system itself needs this, as many of its basic
operations do a lot of configuration lookups that then require import
processing.  :(  Ideally, a configuration schema would simply supply a
conversion callable, and the configuration would cache the result of the
conversion.

* Configurations will essentially be immutable in terms of their externally
visible state, but not necessarily so in their internal state, since their
caches will change over time.  (Because we don't necessarily want to slurp
in all the contents of, say, an LDAP directory during construction just to
ensure total immutability!)  However, this does indicate that there are
thread-safety considerations.  There should be some way to put this in the
base classes so that individual configuration providers don't need to worry
about it.  Specifically, non-cache lookups need to be synchronized.
Luckily, the overall structure and semantics mean that there should be no
way to have a deadlock.
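
To make the requirements above concrete, here's a rough sketch of what
such a configuration might look like.  None of this is actual PEAK code;
the class names ('Config', '_Stack') and the example key ('WORKER_COUNT')
are invented purely for illustration:

    import os

    class Config:
        """Hypothetical read-only configuration: wraps any source with a
        .get(key) method, applies per-key conversion callables, and caches
        converted results so the same key always yields the same value."""

        def __init__(self, source, schema=None):
            self._source = source        # e.g. os.environ, a dict, vars(__main__)
            self._schema = schema or {}  # key -> conversion callable
            self._cache = {}             # key -> already-converted value

        def __getitem__(self, key):
            if key in self._cache:
                return self._cache[key]  # consistent over time; convert only once
            raw = self._source.get(key)
            if raw is None:
                raise KeyError(key)
            convert = self._schema.get(key)
            value = convert(raw) if convert else raw
            self._cache[key] = value
            return value

        def get(self, key, default=None):
            try:
                return self[key]
            except KeyError:
                return default

        def __add__(self, other):
            # Layering "by addition": a new configuration that tries
            # 'self' first, then falls back to 'other'.
            return _Stack(self, other)

    class _Stack(Config):
        def __init__(self, first, second):
            Config.__init__(self, source=None)
            self._layers = (first, second)

        def __getitem__(self, key):
            if key in self._cache:
                return self._cache[key]
            for layer in self._layers:
                try:
                    value = layer[key]
                    break
                except KeyError:
                    pass
            else:
                raise KeyError(key)
            self._cache[key] = value
            return value

    # Text-only sources like os.environ get a schema to convert text;
    # WORKER_COUNT is just an invented example key.
    env      = Config(os.environ, schema={'WORKER_COUNT': int})
    defaults = Config({'WORKER_COUNT': 4})
    cfg      = env + defaults       # the environment wins over the defaults
    print(cfg.get('WORKER_COUNT'))  # 4, unless WORKER_COUNT is set

Whatever interface we actually settle on, those are the properties that
matter: a lookup never changes its answer once given, layering produces a
new configuration object, and a conversion callable runs at most once per
key.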

Note: the thread-safety point above doesn't mean we consider threading an
important feature for us, per se, but some people using PEAK will use
threads, and
this is something that should be covered, even if we just mark an X at the
spot where the locking should go, for some volunteer to submit a patch. :)
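
To mark that X a bit more concretely (again just a sketch, reusing the
hypothetical Config class from above): values already in the cache can be
read without any locking, and only the non-cache lookup path takes the
lock.  Since Config.__getitem__ re-checks the cache while holding the
lock, two threads racing on the same key still hit the underlying source
only once; and since each configuration holds its own lock and lookups
only flow downward through the layers, there's no way to deadlock.

    import threading

    class SafeConfig(Config):   # Config: the hypothetical base sketched above
        def __init__(self, source, schema=None):
            Config.__init__(self, source, schema)
            self._lock = threading.Lock()

        def __getitem__(self, key):
            try:
                return self._cache[key]   # cached values need no locking
            except KeyError:
                pass
            with self._lock:              # <-- the X: only non-cache
                return Config.__getitem__(self, key)  # lookups are synchronized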

Integration: the peak.naming stuff will change to use configurations in
place of the "environment" stuff that was carried over from JNDI.  A
"config" URL context will be created that looks things up in its own
configuration stack, which will normally be drawn from the default
configuration stack, whose contents are as yet undecided.  It is possible
that the default configuration stack will include some kind of "meta
configurer" that does an indirect lookup from something specified in the
default configuration.  That is, the top item of the default stack might
look into the rest of the stack for variables that say what should be stuck
in at the top of the stack.  This should be more than enough flexibility
for anyone.  :)
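
Purely as an illustration of that "meta configurer" idea (the key name
and the use of ConfigParser below are invented, not decisions), the top
of the default stack could be computed by asking the rest of the stack
what belongs there, reusing the Config sketch from above:

    import configparser

    def with_meta_layer(stack):
        # Hypothetical "meta configurer": ask the existing stack for a
        # variable naming an extra config file, then stack its values on top.
        extra_file = stack.get('peak.config.extra')   # invented key name
        if extra_file is None:
            return stack
        cp = configparser.ConfigParser()
        cp.read(extra_file)
        return Config(dict(cp['DEFAULT'])) + stack    # the new top layer wins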

We may also wish to have some association between peak.binding components
and a configuration object which would be used to construct an
InitialContext for doing naming lookups.  Right now,
binding.lookupComponent() uses the default global InitialContext, rather
than one which is "placeful" relative to the component hierarchy.  It may
be tricky to accomplish this cleanly, however.
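
If we do go the "placeful" route, the mechanics might amount to something
like this (speculative; the attribute and method names below are invented,
not actual peak.binding APIs): walk up the component hierarchy looking for
a component that carries its own configuration, falling back to the global
default.

    def placeful_config(component, global_config):
        # Hypothetical: the nearest enclosing component that carries its
        # own configuration wins; otherwise use the global default.
        obj = component
        while obj is not None:
            cfg = getattr(obj, '_config', None)            # invented attribute
            if cfg is not None:
                return cfg
            parent = getattr(obj, 'getParentComponent', None)
            obj = parent() if parent else None
        return global_config

An InitialContext would then be constructed from whatever configuration
that returns, rather than always from the global default.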



