[TransWarp] WCAPI: Requirements for a Python WarpCORE API

Phillip J. Eby pje at telecommunity.com
Tue Apr 9 19:50:53 EDT 2002


Hi all.  This is just something I'm circulating for comment; mostly it is 
just my own musings at this point regarding the future Python WarpCORE API 
for TransWarp.  The basic idea is that you'll define WarpCORE packages as 
modules that export some set of TW Services, which can be combined under a 
TransWarp 'DataModel.Database' Service to implement a database interface.  
This would include support for DDL generation to set up the data model in 
the underlying DB, and for upgrading existing data models.

The framework would be built atop TransWarp's existing tools for generative 
programming, specifically module inheritance, autogeneration of 
metaclasses, and Feature classes.
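
To give a feel for the shape of this, here's a rough sketch in plain 
Python.  None of the names below ('Table', 'Column', 'Database', 'ddl') 
are actual TransWarp or WCAPI interfaces; they're throwaway stand-ins, 
defined inline so the example runs on its own:

    # Stand-in stubs only; none of these names are an existing TransWarp API.
    class Column:
        def __init__(self, sqlType):
            self.sqlType = sqlType

    class Table:
        """Base for the table-like Services a WarpCORE package would export."""
        @classmethod
        def columns(cls):
            return dict((k, v) for k, v in vars(cls).items()
                        if isinstance(v, Column))

    # --- contents of a hypothetical package module, e.g. contacts.py ---
    class Contact(Table):
        name  = Column('varchar(64)')
        email = Column('varchar(128)')

    # --- an application combines packages under one Database-like Service ---
    class Database:
        def __init__(self, *tables):
            self.tables = tables
        def ddl(self):
            """Very naive CREATE TABLE generation, standing in for the
            real driver-specific DDL support."""
            for t in self.tables:
                cols = ', '.join('%s %s' % (n, c.sqlType)
                                 for n, c in sorted(t.columns().items()))
                yield 'CREATE TABLE %s (%s);' % (t.__name__.lower(), cols)

    print('\n'.join(Database(Contact).ddl()))

In the real thing, of course, the Database Service would delegate to a 
driver module rather than emitting SQL strings directly.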

Comments and questions are welcome.

(Note: In case you're unfamiliar with WarpCORE, it is a pattern for 
implementing object models in relational databases.  It has some 
similarities to the ACS/OpenACS object system, although it was developed 
independently and is more cross-platform while being less ambitious with 
regard to storing metadata in the underlying database.)


Goals:

* CRITICAL: Support PostgreSQL and Sybase.  Ideally, an application written 
to the WarpCORE APIs should require *no* source changes to move between 
databases, except to select the correct driver information.

* OPTIONAL: Support Oracle.

* CRITICAL: Support generating standalone DDL scripts; this is important 
for situations where an application user or developer does not have 
administrative control of the DB server and must work through a DBA.

* CRITICAL: Version upgrade support - it must be possible for a package 
designer to include an upgrade process for both DDL and data.  However, it 
is not clear that data upgrades will always be possible using SQL alone, 
which may lead to upgrade complications.  This is probably the riskiest 
area of the design as a whole.

* CRITICAL: Support views, tables, indexes, referential integrity 
constraints, and CRUD operations on rows.

* CRITICAL: support "skinny table" attributes for development, migrating to 
"fat table" attributes for production performance, with transparency at the 
API level.

* IMPORTANT: support higher-level concepts such as associations and membership.

* IMPORTANT: The design must balance cross-platform compatibility with 
ease of development.  That is, it should be easy to build a cross-platform 
package without having to drop down into platform-specific tools.  *But* it 
should be possible to use module inheritance to take advantage of platform 
features where they provide a significant advantage over a generic 
implementation.

* IMPORTANT: Be able to distinguish between "public" and "private" contents 
of a package, to prevent improper dependencies.

* IMPORTANT: Support independent DDL packages which can be separately 
installed - and generated DDL must be similarly installable in piecemeal 
fashion.

* OPTIONAL: Support definition of new DB-level types/classes at application 
runtime, provided that DDL modifications are unnecessary.  This is 
inherently do-able since

* OPTIONAL: Dependency checking, including versions.

* IDEA: Provide support for a "newAsOfVersion" or "versionAdded" attribute 
on metadata objects to assist in autogeneration of upgrade scripts (sketched 
after the Goals list below).

* IDEA: Use classes to define tables, and Features to implement 
constraints, indexes, columns, etc.  Driver modules would supply the 
necessary meta-level implementations (metaclasses and feature bases); a 
rough sketch of this follows the Goals list below.  Tradeoff to consider: 
this statically binds the DB driver into a module, which is good for both 
performance and coding simplicity, *except* when you need to talk to more 
than one driver in the same program, as you might when migrating between 
back ends.  You can of course trivially create another binding with module 
inheritance; it's just that in very large applications there may be N 
modules you need to rebind.  (Perhaps adding some kind of "package 
inheritance" system to TransWarp would address this.)  In any case, the 
multiple-bindings case should probably be considered a YAGNI, since static 
binding offers compelling advantages in implementation simplicity.  
Migration of a dataset between back ends is probably a non-trivial, 
application-specific activity to begin with.
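
Regarding the skinny-table/fat-table goal above, here's a purely 
illustrative sketch; the strategy classes, method names, and column 
layouts are invented for the example.  The point is only that 
application code asks for an attribute the same way in both cases, 
while the storage strategy decides what schema and SQL lie underneath:

    class SkinnyStorage:
        """Development layout: one (oid, name, value) row per attribute,
        so adding attributes needs no ALTER TABLE."""
        def selectAttr(self, table, oid, attr):
            return ("SELECT value FROM %s_attrs WHERE oid=? AND name=?" % table,
                    (oid, attr))

    class FatStorage:
        """Production layout: one column per attribute, for faster reads."""
        def selectAttr(self, table, oid, attr):
            return ("SELECT %s FROM %s WHERE oid=?" % (attr, table), (oid,))

    def getAttr(storage, table, oid, attr):
        # Application code always comes through here; it never needs to
        # know which layout is in use.
        sql, params = storage.selectAttr(table, oid, attr)
        return sql, params   # would be handed off to the DB driver

    print(getAttr(SkinnyStorage(), "contact", 42, "email"))
    print(getAttr(FatStorage(),    "contact", 42, "email"))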
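
The "versionAdded" idea could reduce upgrade generation to a filtering 
problem, at least for purely additive DDL changes.  A toy sketch; apart 
from the attribute name itself, nothing here is a real WCAPI interface:

    class Column:
        def __init__(self, name, sqlType, versionAdded=1):
            self.name = name
            self.sqlType = sqlType
            self.versionAdded = versionAdded

    def upgradeDDL(table, columns, fromVersion, toVersion):
        """Emit ALTER TABLE statements only for columns newer than the
        currently installed package version."""
        return ["ALTER TABLE %s ADD COLUMN %s %s;" % (table, c.name, c.sqlType)
                for c in columns
                if fromVersion < c.versionAdded <= toVersion]

    cols = [Column("subject", "varchar(255)"),                 # since version 1
            Column("priority", "integer", versionAdded=2)]     # new in version 2

    print(upgradeDDL("message", cols, fromVersion=1, toVersion=2))
    # ['ALTER TABLE message ADD COLUMN priority integer;']

Data upgrades, as noted above, are another matter; this only helps with 
the DDL side.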
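
Finally, the "classes define tables, Features implement columns and 
constraints" idea might look roughly like this.  The metaclass below 
merely stands in for what a driver module would supply: it only collects 
the declared Features, where a real driver's metaclass would also know 
how to turn them into its platform's DDL and SQL.  All names are 
hypothetical:

    class Feature:
        """Base for column/index/constraint declarations."""
        def __init__(self, **kw):
            self.__dict__.update(kw)

    class Column(Feature): pass
    class ForeignKey(Feature): pass
    class Index(Feature): pass

    class TableType(type):
        """Stand-in for a driver-supplied metaclass: gathers the Features
        declared on a class at class-creation time."""
        def __new__(meta, name, bases, ns):
            cls = super().__new__(meta, name, bases, ns)
            cls.features = dict((k, v) for k, v in ns.items()
                                if isinstance(v, Feature))
            return cls

    class Table(metaclass=TableType):
        pass

    # A package author would then write only declarations:
    class Message(Table):
        subject  = Column(sqlType='varchar(255)', notNull=True)
        sender   = ForeignKey(references='contact')
        bySender = Index(columns=('sender',))

    print(sorted(Message.features))    # ['bySender', 'sender', 'subject']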


Non-goals:

* SQL source does not need to be portable across drivers; in fact the 
mappings to table names, stored procedure names, etc., do not even need to 
be equivalent from one DB to another.  The Python APIs for defining 
metadata, and for reading/manipulating data, are the only things that must 
be identical across backends.

* As implied by the preceding, supplying uniform access to a WarpCORE DB 
from non-Python code is also a non-goal.

* WCAPI is not a fully generic DDL/SQL modelling system; models not based 
on WarpCORE are explicitly out-of-scope, even though it may be necessary to 
create a very-nearly-generic DDL system to support WarpCORE.  In other 
words, it is acceptable to implement the WCAPI system in ways that require 
a WarpCORE kernel to exist in the target database.



