[PEAK] Decentralizing functional aggregation in PEAK

Phillip J. Eby pje at telecommunity.com
Fri Oct 29 17:00:18 EDT 2004


There is a certain pattern that recurs throughout PEAK.  Or, maybe I should 
call it an antipattern, because it's something I don't like.

Multiple dimensions of concern tend to accumulate in "lumps" on the same 
classes.  'peak.model' is an especially egregious example.  Model features 
have methods and attributes to support:

    * structural metadata
    * security metadata
    * parsing and formatting syntax
    * CORBA typecode generation
    * code generation
    * validation
    * ordering of features

(Note that it's not so much an issue that you *declare* all this stuff in a 
peak.model class.  After all, that's a really convenient place to do it, if 
that's where you're going to use it.  The issue is more that peak.model has 
code built into its base classes to handle all of these things.)

Similarly, in 'peak.web', the "traversal context" and "interaction policy" 
implementation classes have a wide array of attributes, to satisfy such 
diverse concerns as user management, UI skinning, HTTP request data, and 
views on content objects, all bundled into single classes.

There are two negative consequences to this antipattern.  First, the code 
is complex, and grows more so over time as additional concerns 
surface.  Second, it has limited extensibility, because other developers 
can't add their own concerns as first-class citizens.  For example, if we 
didn't have the syntax/parsing facility, you couldn't add it to PEAK unless 
you were a PEAK developer.

Finally, the antipattern tends to produce code duplication in PEAK itself, 
since each new concern tends to do things that are slightly similar to 
what the existing concerns already do.  (Wait, that's *three* negative 
consequences.  Chief
among the negative consequences of this antipattern...)

Anyway.  There are several other areas in PEAK where the basic issue 
recurs.  For example, the 'binding' package dabbles in permissions, 
configuration keys, and so forth.

If we ignore peak.web for the moment, it's possible to view the issue as 
primarily one of metadata at the attribute and class level.  For example, 
an attribute's security permission, or a class' parsing syntax.  I think 
it's possible that we could simply have an annotation interface like 
'IAttributeAnnotation', and have ways to just add annotations to things, like:

     someAttr = binding.Require("xyz", [int, security.Anybody])

Where the 'int' and the 'security.Anybody' each get adapted to 
IAttributeAnnotation, and then are given the attribute name, descriptor, 
and class to play with, so they can stash their metadata away.  Heck, it 
could even work for syntax, such that fmtparse.IRule instances might also 
adapt to IAttributeAnnotation or IClassAnnotation, so that a future 
version of peak.model might have classes like:

     class Thing(model.Element):

         foo = model.Attribute(int, security.Anybody)
         bar = model.Attribute(str)

         model.syntax( foo, '*', bar )

This approach of using class advisors to describe class metadata, and 
annotations to describe attributes, could probably go a very long way to 
separating these concerns at the most basic level.
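
To make that a bit more concrete, here's a purely hypothetical sketch in 
plain Python (the names 'annotateAttribute' and '_peak_permissions' are 
invented for illustration, not part of PEAK's API): an annotation is just 
an object that gets handed the class, attribute name, and descriptor, and 
stashes its metadata somewhere it can be found again later:

    # Hypothetical sketch only; 'annotateAttribute' and '_peak_permissions'
    # are made-up names, not anything PEAK actually defines.
    class PermissionAnnotation(object):
        """Records a security permission for one attribute of a class."""

        def __init__(self, permission):
            self.permission = permission

        def annotateAttribute(self, cls, attrName, descriptor):
            # stash the metadata on the class, in a per-concern registry
            registry = cls.__dict__.get('_peak_permissions')
            if registry is None:
                registry = {}
                setattr(cls, '_peak_permissions', registry)
            registry[attrName] = self.permission

Something adapting 'int' or 'security.Anybody' to the annotation interface 
would then get its 'annotateAttribute' called as the class is built.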

Of course, a consequence of this type of separation is that the metadata 
itself can't live as ordinary attributes on classes or descriptors any 
more, and doing e.g. 'Thing.foo.fromString(x)' would no longer be an 
option.  Instead, you'd need to have something more like 'fromString(Thing, 
"foo", x)', where 'fromString' was an API function that pulls the necessary 
metadata from the class.  Or, perhaps you'd use something more like 
'IParser(Thing).parse("foo",x)'.
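
For illustration, a toy version of that kind of API function might look 
like the following, where the module-level registry merely stands in for 
whatever adaptation-based lookup would really be used:

    # Hypothetical sketch; a real version would find the syntax metadata
    # via adaptation rather than this toy module-level registry.
    _syntaxRules = {}               # (cls, attrName) -> rule with .parse()

    def declareSyntax(cls, attrName, rule):
        _syntaxRules[(cls, attrName)] = rule

    def fromString(cls, attrName, text):
        """Parse 'text' using the syntax declared for cls.attrName."""
        rule = _syntaxRules.get((cls, attrName))
        if rule is None:
            raise TypeError(
                "no syntax declared for %s.%s" % (cls.__name__, attrName))
        return rule.parse(text)

So instead of 'Thing.foo.fromString(x)', you'd call 
'fromString(Thing, "foo", x)'.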

If implemented via StickyAdapter or something similar, then the metadata 
storage could actually be done in the adapter itself, such that attribute 
and class annotations just get put on the adapter.  Interestingly, this 
would address any issue of namespace conflicts on places to put metadata in 
the class dictionary, since the data would effectively be indexed by unique 
protocol objects.

This could also help with the next problem I was going to bring up: 
context-specific metadata overrides.  For example, suppose that in a 
particular situation you need a completely different parsing/formatting 
syntax for a type when it's rendered to XML than when it's displayed to a 
user.  By simply creating Variation protocols for these different 
circumstances, you can automatically use the default syntax, but have the 
option of declaring a more specific syntax where appropriate.  E.g. I could 
say something like:

    IXMLSyntax = protocols.Variation(ISyntax)
    IUISyntax  = protocols.Variation(ISyntax)

and then use these protocols to do e.g. 'IXMLSyntax(Thing).parse("foo",x)'.

Similarly, this approach could be used to have alternative security 
declarations, relational mappings, etc. for a class in different contexts.
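
The fallback behavior is the important part here.  A toy model of it 
(not PyProtocols itself, just an illustration of how a context-specific 
registry could defer to the protocol it varies) might look like:

    # Hypothetical sketch of Variation-style fallback: a context-specific
    # registry answers from its own declarations first, then defers to
    # the protocol it varies.
    class Registry(object):
        def __init__(self, base=None):
            self.base = base
            self.rules = {}

        def declare(self, cls, rule):
            self.rules[cls] = rule

        def lookup(self, cls):
            if cls in self.rules:
                return self.rules[cls]
            if self.base is not None:
                return self.base.lookup(cls)
            raise LookupError("no declaration for %r" % (cls,))

    ISyntax    = Registry()
    IXMLSyntax = Registry(base=ISyntax)   # like protocols.Variation(ISyntax)

    class Thing(object): pass

    ISyntax.declare(Thing, "default rule")
    assert IXMLSyntax.lookup(Thing) == "default rule"       # falls back
    IXMLSyntax.declare(Thing, "xml-specific rule")
    assert IXMLSyntax.lookup(Thing) == "xml-specific rule"  # override wins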

Once we had a facility like this, peak.model code could be cut back to 
focus on structural metadata, code generation, and maybe some 
validation.  Or, it might be possible for peak.model to in some sense "fade 
away", leaving only some loosely coupled concerns.  For example, validation 
could become a kind of metadata, listing constraints for a class.  Maybe 
even the structural metadata (referencedType, referencedEnd, and 
lowerBound/upperBound) could be "just" another kind of metadata, with the 
descriptor just being a placeholder that's subject to adaptation.

The only issue with that last thought is that you'd need a way to 
parameterize the class to know what context it's "in", so it would know 
what protocol to adapt to.  It seems to make more sense to define the core 
operations of a model object to delegate back to a workspace object that in 
turn uses adapted forms of the class to flesh out any such 
operations.  That is, it simply delegates __get__/__set__/__delete__ 
operations to the workspace for further action.  The workspace adapts the 
target class to a contextual protocol (e.g. 
'self.descrProto(elementClass)'), and then executes the operation.  Or, more 
likely, it uses generic functions that have been initialized using 
metadata, so that e.g. constraints can hook into these functions as well.
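
As a bare-bones sketch of that delegation (every name here, 
'FeaturePlaceholder', 'Workspace', and so on, is invented for 
illustration, each element is assumed to have a 'workspace' attribute, 
and the workspace's lookup is left trivial):

    # Hypothetical sketch: the descriptor is just a placeholder that hands
    # every operation to a workspace, which is where contextual metadata
    # (or metadata-initialized generic functions) would come into play.
    class FeaturePlaceholder(object):
        def __init__(self, name):
            self.name = name

        def __get__(self, ob, typ=None):
            if ob is None:
                return self
            return ob.workspace.getFeature(type(ob), ob, self.name)

        def __set__(self, ob, value):
            ob.workspace.setFeature(type(ob), ob, self.name, value)

        def __delete__(self, ob):
            ob.workspace.deleteFeature(type(ob), ob, self.name)

    class Workspace(object):
        """Adapts the element class to a contextual protocol and acts."""

        def getFeature(self, cls, ob, name):
            # a real version would consult e.g. self.descrProto(cls) here
            return ob.__dict__[name]

        def setFeature(self, cls, ob, name, value):
            # constraints, change notification, etc. could hook in here
            ob.__dict__[name] = value

        def deleteFeature(self, cls, ob, name):
            del ob.__dict__[name]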

In the midst of all of this, I'm thinking about also dropping the 
method-exporter stuff, in favor of using sequence types that are 
"observable".  Most of the reasons why the method-exporter system exists, 
are no longer valid, and it's based largely on the same antipattern as this 
post is about.  That is, embedding various concerns directly into the 
implementation of a domain class, rather than separating the concerns, and 
allowing them to be overridden in some contexts.
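
Just to pin down what "observable" means here, a minimal sketch (not 
PEAK's design, only an illustration) would be a list that reports its 
mutations to registered callbacks:

    # Hypothetical sketch: mutations are reported to subscribers, so
    # persistence, validation, and so on can hook in from the outside
    # instead of being baked into the domain class.  A real version would
    # of course cover every mutating method, not just these two.
    class ObservableList(list):
        def __init__(self, *args):
            list.__init__(self, *args)
            self._observers = []

        def addObserver(self, callback):
            self._observers.append(callback)

        def _notify(self, event, item):
            for callback in self._observers:
                callback(self, event, item)

        def append(self, item):
            list.append(self, item)
            self._notify('append', item)

        def remove(self, item):
            list.remove(self, item)
            self._notify('remove', item)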

One interesting side effect of all this is that metadata is just 
metadata.  So, any kind of metadata we create, such as security 
permissions, or types, could in principle be applied to almost any kind of 
attribute, for use by an appropriate subsystem.  So, you could just as 
easily declare a parsing syntax for a binding.Attribute as for a 
model.Attribute, and use the parsing functions with the corresponding 
class.  This also opens up the possibility of new "areas of concern" being 
implemented for PEAK, without having to hack PEAK to add them.

So what would we need to implement this?  Not a whole lot.  Each individual 
area of concern/kind of metadata needs a few things:

    * A StickyAdapter type that implements the metadata interface for its
      concern
    * A policy for how metadata is "inherited" from base classes
    * An API for populating metadata declared for the current class
    * A way to create context-specific declarations

Ideally, these mechanisms should be immutable; that is, once the metadata 
is declared for a class, or a class+context, it should be unchanging.  (You 
can always define a new context where the metadata changes.)

Anyway, depending on the concern, there may then need to be 
IAttributeAnnotation objects and class advisor functions, to declare the 
metadata.  Either the advisor or the annotations will create the adapter 
for the type, and will need to do so semi-idempotently.  This probably 
rules out actual use of 'StickyAdapter', which is more for adapting 
instances than classes anyway.  Maybe instead we'll just have a 
'__metadata__' mapping, keyed by protocol, that contains all annotations 
for that destination protocol.  When you adapt to the protocol, it creates 
a "sticky" adapter from the supplied metadata, plus any inherited metadata 
(by adapting the base classes to the same protocol).
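
Very roughly, and with everything except the '__metadata__' name invented 
for the sake of illustration (a merged mapping stands in for the adapter 
object), the lookup might behave like this:

    # Hypothetical sketch: '__metadata__' maps protocol -> {name: value};
    # looking up a protocol merges the class's own declarations over those
    # of its bases, and caches the result so it stays "sticky".
    _cache = {}

    def metadataFor(cls, proto):
        try:
            return _cache[(cls, proto)]
        except KeyError:
            merged = {}
            for base in reversed(cls.__mro__):        # most-derived wins
                declared = base.__dict__.get('__metadata__', {})
                merged.update(declared.get(proto, {}))
            _cache[(cls, proto)] = merged
            return merged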

There are a few tricks that we might have to play to get this to work 
correctly.  But it should be possible to have some kind of 
'MetadataAdapter' base class that can then be used to create concerns in a 
straightforward way.  And, we could derive some subclasses for common kinds 
of inheritance/override policies.  For example, many kinds of metadata will 
consist simply of a mapping from attribute name to some value, like the 
attribute's type or security permission, where the default is inherited 
from the class' base classes in MRO order.  Other policies will be more 
complex, like the handling of feature sort order in today's 
peak.model.  And still others will involve data that's specific to the 
class, rather than to attributes, such as a class' parsing syntax.
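
For instance (again with invented names), the simple name-to-value case 
could be one policy subclass, and a single class-level value like a 
parsing syntax another:

    # Hypothetical sketch of two inheritance policies as subclasses of a
    # 'MetadataAdapter' base; all of these names are made up here.
    class MetadataAdapter(object):
        key = None                      # each concern picks a unique key

        def __init__(self, cls):
            self.cls = cls

        def declaredOn(self, cls):
            return cls.__dict__.get('__metadata__', {}).get(self.key)

    class AttributeMapMetadata(MetadataAdapter):
        """name -> value, with defaults inherited in MRO order."""

        def get(self, attrName, default=None):
            for base in self.cls.__mro__:             # nearest wins
                declared = self.declaredOn(base)
                if declared and attrName in declared:
                    return declared[attrName]
            return default

    class ClassLevelMetadata(MetadataAdapter):
        """A single per-class value, e.g. a class's parsing syntax."""

        def get(self, default=None):
            for base in self.cls.__mro__:
                declared = self.declaredOn(base)
                if declared is not None:
                    return declared
            return default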

Interestingly, this approach resembles the way views work in peak.web, 
except that you never declare views within a class; they're always declared 
as external metadata.  But the hierarchy of context-specific Variation 
protocols is the same.  What's different here is the idea of having the 
adapters *carry* metadata, rather than simply *being* the metadata, and of 
having inheritance between the adapters.  Views currently are also just an 
adapter from instances of a type or interface to a named protocol, while 
this new idea is mostly about adapting from types themselves, not the 
instances.

This approach also doesn't address the antipattern's existence in peak.web, 
where the "traversal context" and "interaction policy" implementation 
classes aggregate a wide variety of concerns.  I think I'll have to drill 
into that issue a bit more in a separate post at some point.  There, the 
main issue is that it would be nice to be able to add things like a 
shopping cart or session or other application-specific things to the 
system, without having to hack PEAK or subclass the basic types.  If 
everything could be configuration-driven, it would be mighty nice, 
especially if it allowed most of the currently built-in properties to 
become part of the configuration, too.

Anyway, I think the adaptation-based approach I've described has strong 
promise for decentralizing many of the concerns that are now tightly 
interwoven in peak.model and peak.binding, leading to complex and fragile 
base classes, not to mention very obscure ways of doing certain things, 
like the way the syntax stuff currently works.



