From pje at telecommunity.com  Sat Jun  4 15:21:03 2005
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] peak.running.commands patch
In-Reply-To: <1117298097.1660.106.camel@oneiros>
Message-ID: <5.1.1.6.0.20050604152029.02303588@mail.telecommunity.com>

At 09:34 AM 5/28/2005 -0700, Dave Peticolas wrote:
>Here is a possible patch for peak.running.commands.
>
>It allows you to use sys.exit() without an argument
>and still get the default return value of 0 (the
>default specified for sys.exit in the Python docs).

I've now implemented a similar feature in the CVS version. Thanks for
the suggestion.

From psucorp at grinchcentral.com  Mon Jun  6 09:30:54 2005
From: psucorp at grinchcentral.com (Erik Rose)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Breaking up PEAK
In-Reply-To: <5.1.1.6.0.20050529001605.021f8370@mail.telecommunity.com>
References: <5.1.1.6.0.20050529001605.021f8370@mail.telecommunity.com>
Message-ID: 

Hi, pje, all.

> Here are some preliminary thoughts on how the breakup might go.

Just as a data point for you, the packages I'm using directly are
binding, naming, config, model, protocols, and storage.

> My initial feeling about this list is that it's both too coarse and
> too fine-grained at the same time. It's too coarse because for
> example both peak.util and peak.running are umbrella packages
> containing many things that could quite reasonably be split out
> individually.

I'd worry less about trying to normalize the structure of PEAK as if it
were a DB and more about the interaction with users' brains. I think the
breakdown you suggested -- 3 clumps of about 5 packages each -- is good;
it fits ESR's definition of compactness
(http://www.catb.org/~esr/writings/taoup/html/ch04s02.html#compactness),
and each clump even fits in the 5-plus-or-minus-2 size of human working
memory.
If you break things down further, I think new users will again descend
into panic, saying "I have no idea what this thing does; there are 500
pieces!"

> I'm also uneasy because keeping track of versioning and release
> information for *fifteen* packages (versus two now) seems a little
> overwhelming.

That is going to be hard, and there's little way around it. However,
it's also one of the most useful things you could do for someone in my
position two months ago. I had significant trouble getting PEAK accepted
in my department because the latest release said "alpha" on it; to some
people, labeling carries a lot of weight. If, by splitting PEAK up a
bit, you could label more parts "stable", you'd probably win some
conservative users.

Btw, I'm sure you've considered this, but if you're going to switch
version control systems, now would be a natural time. Maybe one or two
of them have some tricks to ease cross-package versioning.

> documentation. And, people encountering these small packages don't
> run into that "trying to learn PEAK" (as in *all* of it) barrier.

Yep, I'm still scaling that barrier myself, but I learned what was
important pretty quickly by reading the tutorials and looking at the
diagram at http://peak.telecommunity.com/DevCenter. Btw, how up-to-date
is that diagram? An interesting thing happens when you publish
something -- people believe it! :-)

> Further, it will be more obvious to people just how much functionality
> is available in the PEAK "family of products"

The above-mentioned diagram made that clear to me.

> -- frankly I myself am amazed whenever I realize that I don't even
> know myself how much stuff is in there. There's more than I can keep
> track of consciously any more!

See the link about compactness, above. :-)

Thanks for the opportunity to give input; you're doing a fine job!
Erik

From rk at dat.cz  Mon Jun  6 10:45:43 2005
From: rk at dat.cz (Radek Kanovsky)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Templates with layout
Message-ID: <20050606144543.GA30653@dat.cz>

Let's have some simple PWT template "repr.pwt":
It correctly shows repr() of the underlying object. Everything is OK until
we try to use some layout:

  repr.pwt:
layout.pwt:
Then we end in infinite recursion, because the current object in repr.pwt
is not the underlying object but the parsed layout.pwt template. A
workaround is to use ``content:replace=".."'' in repr.pwt, but I am almost
sure that this is not the intended behaviour. The following patch solves
the problem, but may have some undesired side effects that I am not aware
of now.

Index: templates.py
===================================================================
--- templates.py	(revision 53)
+++ templates.py	(working copy)
@@ -626,7 +626,9 @@
         elif path=='/default':
             return super(TemplateDocument,self)
         else:
-            return Replace(self, dataSpec=path, params=self.params.copy())
+            wrap = LayoutParamWrapper
+            params = dict([(p,wrap(v)) for p,v in self.params.iteritems()])
+            return Replace(self, dataSpec=path, params=params)

         if attrName in self.params:
             return IDOMletRenderable(self.params[attrName])
@@ -638,9 +640,15 @@

     fragment = page = binding.Make(layoutDOMlet)

+class LayoutParamWrapper(object):
+
+    protocols.advise(instancesProvide=[IDOMletRenderable])
+
+    def __init__(self, elem):
+        self.elem = elem
+
+    def renderFor(self, ctx, state):
+        return self.elem.renderFor(ctx.previous, state)

RadekK

From pje at telecommunity.com  Mon Jun  6 12:02:58 2005
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Templates with layout
In-Reply-To: <20050606144543.GA30653@dat.cz>
Message-ID: <5.1.1.6.0.20050606111056.0219f228@mail.telecommunity.com>

At 04:45 PM 6/6/2005 +0200, Radek Kanovsky wrote:
>Let's have some simple PWT template "repr.pwt":
>
>It correctly shows repr() of the underlying object. Everything is OK until
>we try to use some layout:
>
>  repr.pwt:
>
>
>  layout.pwt:
>
>
>Then we end in infinite recursion, because the current object in repr.pwt
>is not the underlying object but the parsed layout.pwt template. A workaround
>is to use ``content:replace=".."'' in repr.pwt, but I am almost sure
>that this is not the intended behaviour.

Hi Radek. I'm unable to reproduce this behavior. I just checked in a new
test that does this using the HTML straight from your examples above. See
peak.web.tests.test_resources.IntegrationTests.testLayout. Can you tell me
more about how to reproduce this behavior? The system already has code
that should make layouts work the way you want them to, so I'm surprised
that you're having a problem.

Could it be that you are not rendering the top-level page via handle_http,
but instead are using a manually-created start context and renderFor()?

From rk at dat.cz  Tue Jun  7 03:39:18 2005
From: rk at dat.cz (Radek Kanovsky)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Templates with layout
In-Reply-To: <5.1.1.6.0.20050606111056.0219f228@mail.telecommunity.com>
References: <20050606144543.GA30653@dat.cz> <5.1.1.6.0.20050606111056.0219f228@mail.telecommunity.com>
Message-ID: <20050607073918.GL3430@dat.cz>

On Mon, Jun 06, 2005 at 12:02:58PM -0400, Phillip J. Eby wrote:
> Hi Radek. I'm unable to reproduce this behavior. I just checked in a new
> test that does this using the HTML straight from your examples above. See
> peak.web.tests.test_resources.IntegrationTests.testLayout. Can you tell me
> more about how to reproduce this behavior? The system already has code
> that should make layouts work the way you want them to, so I'm surprised
> that you're having a problem.
>
> Could it be that you are not rendering the top-level page via handle_http,
> but instead are using a manually-created start context and renderFor()?

Sorry for the confusion. I thought I saw the error clearly in the sources,
but it was not actually there. My configuration had modifications in
pwt-schema.ini that I didn't notice.
Thanks to working on this problem, things are clearer to me now,
especially the parameter wrapping in templates.Replace with
DOMletMethod. It does what I thought it didn't :-)

RadekK

From pje at telecommunity.com  Tue Jun 14 08:25:29 2005
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Breaking up PEAK
In-Reply-To: 
References: <5.1.1.6.0.20050529001605.021f8370@mail.telecommunity.com>
Message-ID: <5.1.1.6.0.20050614073941.01df9760@mail.telecommunity.com>

At 09:30 AM 6/6/2005 -0400, Erik Rose wrote:
>I'd worry less about trying to normalize the structure of PEAK as if it
>were a DB and more about the interaction with users' brains. I think
>the breakdown you suggested -- 3 clumps of about 5 packages each -- is
>good; it fits ESR's definition of compactness
>(http://www.catb.org/~esr/writings/taoup/html/ch04s02.html#compactness),
>and each clump even fits in the 5-plus-or-minus-2 size of human working
>memory. If you break things down further, I think new users will again
>descend into panic, saying "I have no idea what this thing does; there
>are 500 pieces!"

That's just it, though; if things can stand on their own, they can stand
on their own. For example, there is an assortment of testing utilities in
peak.util: mockdb, mockets, and unittrace. The odds that anybody will do
much of anything with them in peak.util are pretty much nil, but if I
bundled them as a mini-project (perhaps called "MockTesting"), it'd
probably get some interest from TDD-oriented folks. Similarly, nobody's
going to install PEAK just to get peak.util.uuid and friends, but as a
"PyUUID" package providing a cross-platform API for the draft UUID/GUID
spec, it's another thing people would download. Neither MockTesting nor
PyUUID would have the same audience as the rest of PEAK, and there's
absolutely no connotation that you have to learn these packages in order
to "learn PEAK".
There are many other use-case-oriented groupings of PEAK's contents that
could be spun off to live as semi-independent packages. The purpose here
isn't to "normalize" PEAK, but rather to make its functionality
accessible to a wider audience (in the overall sense) by having
individual packages be available for narrower audiences.

An interesting side effect, by the way, is that it's going to drive some
innovations in the PEAK core machinery. For example, if PEAK splits up,
I'll really want to have a way for 'peak.ini' to be split across egg
boundaries, and ways for eggs to easily trigger registrations of certain
functionality when another package is present. For example, peak.security
as it stands today could depend solely on today's PyProtocols, if it
didn't register certain functions with peak.binding to support attribute
metadata. But if I made peak.util.imports into an "ImportTools" package,
then I could release an "ACLRules" package, depending on ImportTools and
RuleDispatch ('dispatch' from PyProtocols), by using the 'whenImported()'
functionality in ImportTools to only register those functions if and when
peak.binding.api is imported.

On the one hand, the idea of having such intricate inter-package
dependencies sounds pretty scary, considering that PEAK could easily
break into a *lot* of separately distributed packages. On the other hand,
almost everything I've named above besides RuleDispatch is pretty darn
stable already, so it's not like you'll notice most of the time. However,
people creating third-party distributions of PEAK are probably going to
go through some initial pain as we make the switch. (OTOH, there are
probably few people doing that with PEAK as a whole, but many more such
people who are interested in doing it for exactly one of PEAK's many
packages.)

>Btw, I'm sure you've considered this, but if you're going to switch
>version control systems, now would be a natural time.
>Maybe one or two of them have some tricks to ease cross-package
>versioning.

Yes, going to Subversion would probably help a lot with the reshuffling,
as far as retaining directory histories once stuff starts moving around.
The main drawback I see is that Subversion doesn't do repository
symlinks: when you copy stuff it's effectively a branch. On the other
hand, that will certainly encourage making stuff cleanly separable.

I've been using Subversion at OSAF for a few weeks now, but it has some
quirks that I find seriously annoying. For one thing, the whole "mime
types and EOL type get set on the client" thing *really* ticks me off.
That should be something you configure on the repository, for heaven's
sake. Its HTML notice emails don't play well with pipermail, either, and
it's a serious pain to build a server for it. On the plus side, however,
there's a really nice wiki/case-tracking system (Trac) that's written in
Python and integrates with Subversion, and I could probably go for
replacing our current wiki software with it at some point.

Of course, finding hours in the day to do any of this is always a
problem. Currently, my spare time is still being filled by working on
setuptools and EasyInstall themselves, in order to enable this whole
deal. I think the coding tasks could probably be completed in a few more
weekends, but documentation, specifications, support, and advocacy for
the "eggs" concept and implementation are likely to be ongoing for some
time. :(

From pje at telecommunity.com  Tue Jun 28 01:03:18 2005
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Beginning the breakup
Message-ID: <5.1.1.6.0.20050628001731.02e8ab60@mail.telecommunity.com>

I've managed to get setuptools to a good enough place to allow automatic
dependency installation in a sane way, so I'm starting to plan the first
breakups of the PEAK oeuvre.
I was originally going to convert our CVS repository to Subversion before
doing any breakups, but I've found, through attempts at converting it,
that cvs2svn doesn't like the repository symlinks we're using to share
stuff between projects. That's a bit of a problem, since we share a *lot*
of stuff between projects. So it looks like we'd probably be better off
eliminating the sharing *before* even considering moving anything to
Subversion.

I think the first step is going to be to get rid of PEAK's embedded
versions of setuptools, by migrating to the current Python CVS sandbox
version that I've been developing on. This will be accomplished by adding
the 'ez_setup.py' script to each of the projects (PEAK, PyProtocols,
wsgiref) that uses it. After that, I'll delete the source of setuptools
from the CVS HEAD as a normal revisioned delete. The only tricky bit is
that there will be a period between some of the commits where an
intermediate checkout wouldn't work correctly, due to conflicts between
ez_setup and setuptools. But at the end everything should be fine and the
history intact.

The next thing that's shared across a bunch of products is the
src/setup/common.py file. However, I only use the stuff that's in it to
generate online documentation, and documentation for the "source+doc"
distribution variants. Does anybody use it for anything else? I'm
actually leaning towards getting rid of the "source+doc" distribution
format anyway, in favor of just generating and distributing the
documentation separately. But I'm open to feedback on all this. Anyway,
my current thought is just to nix src/setup/common.py and update my
server-side scripts to run happydoc directly for the online docs.

After these two, the first big split will occur: the 'dispatch' package
will get its own project distribution, tentatively called "RuleDispatch",
with a version of something like 0.5a1.
In order to do this, I'm going to have to actually *copy* the subtree in
CVS, so I can keep the history sane in both PyProtocols and the new
RuleDispatch project. Once I've copied it, I'll of course delete it from
the PEAK+PyProtocols HEAD. RuleDispatch will require a 1.0a version of
PyProtocols, but PyProtocols won't depend on anything but setuptools.
PEAK will depend on RuleDispatch.

The next split would then be to copy wsgiref, protocols, and fcgiapp to
their respective projects from PEAK (instead of continuing to share them
via symlinks), and then delete them from PEAK's HEAD, adding the
requisite dependencies. I'd also delete ZConfig, and add some optional
dependency settings to indicate which PEAK tools need ZConfig installed.

At that point, it should be pretty easy to get a full install of PEAK
using EasyInstall or just setup.py install. For development purposes,
it's going to be harder to work on something that involves simultaneous
changes to multiple projects, but it's not that often that I do that
anyway. In fact, the current situation sometimes causes me to forget to
do an update to download into PEAK something I changed in PyProtocols.
So, it'll probably work out just fine to keep things separate.

After these initial splits, I can start looking at others, but for right
now I'm thinking that RuleDispatch, PyProtocols, PEAK, wsgiref, and
fcgiapp are plenty for us to be distributing at the outset. I think I'll
also need to set up some sort of "daily build" cronjobs for all the
projects so that it's easy to use EasyInstall to get development versions
of stuff, and development versions can rely on a particular datestamp.

Anyway, this is all just a bunch of random musings on how it'll all work.
Any thoughts, input, or questions are appreciated.

From pje at telecommunity.com  Tue Jun 28 23:50:00 2005
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Moving to Subversion
Message-ID: <5.1.1.6.0.20050628211324.0253c428@mail.telecommunity.com>

After more investigation, it seems like it's going to be easier to just
move to Subversion than to do all the CVS repository munging I described
last night. So much easier, in fact, that I've already done a prototype
migration:

ViewCVS: http://svn.eby-sarna.com/

Anonymous SVN:
  svn://svn.eby-sarna.com/svnroot/wsgiref
  svn://svn.eby-sarna.com/svnroot/PEAK
  svn://svn.eby-sarna.com/svnroot/PyProtocols

If you have a login to the box (i.e., if you're Ty ;) ), you can use
svn+ssh: URLs instead of svn: URLs to get write access, although you may
have to modify your .bashrc to set a path that includes svnserve. (I had
to.)

I don't have commit messages implemented yet, although if you're on the
source-changes mailing list you might think otherwise with all the test
mails flying by. Those are actually being generated by svn2cvs, which is
a reverse migration script that copies Subversion changes back into CVS.
This will be what we'll use as our safety line in case we have to abort
the migration and don't want to redo commits. It also means that I won't
have to write a script to send commit messages right away, because the
CVS commit scripts take care of that, although they generate links to the
CVS revisions, not the SVN ones. But I'll want to switch that over to
Subversion-specific commit messages pretty soon. Finally, it also means
that I'll be able to leave all the current CVS infrastructure in place
for people who aren't quite ready to make the switch.

I haven't decided yet when to actually "flip the switch" and begin using
SVN officially. My migration scripts seem to be in order, and everything
seems to work, so I might do it as soon as tomorrow evening, or as late
as the weekend depending on how things go.
Things have been mostly going so well that I might be tempted to add more
features, like maybe playing with Trac (a combination wiki/case tracker)
and seeing whether I want to set one up for each of the projects that's
getting split off. On the other hand, I'm really anxious to start
breaking the really "external" stuff out of PEAK (wsgiref, protocols,
dispatch, fcgiapp, ZConfig, etc.), so I might want to hold off on getting
fancy, except maybe to check whether Trac makes any assumptions about
your repository layout (e.g. that branches/trunk/tags crap), in which
case I might need to rethink my current plan of matching our CVS tree
exactly and adding _BRANCHES and _TAGS directories at the root for future
use.

From parente at cs.unc.edu  Wed Jun 29 07:12:00 2005
From: parente at cs.unc.edu (Peter Parente)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Adapting based on property
Message-ID: 

Hi,

I'd like to use PyProtocols to adapt objects returned by a third-party
library to a set of interfaces specified in my system. The external
library returns instances of type Accessible via a number of methods.
The Accessible object is a client-side proxy for some server-side
object, and a generic interface to a slew of different server-side
objects. The Role attribute of the Accessible object says exactly what
kind of object it is representing on the server side and, thus, what its
various values mean. For instance, the Value property of an Accessible
object with a Role of 'list' is the length of the list, while the Value
property of an Accessible object with a Role of 'tree' is the number of
levels in the tree.

What I'd like to do is adapt instances of the generic Accessible objects
to more specific interfaces based on the Role attribute. For example,
I'd like to have an IList protocol with a method getLength. Then I'd
like to have adapters like AccessibleListViewAsList and
AccessibleComboboxAsList that implement the IList interface for various
list-like objects.
Here's my problem. In the PyProtocols documentation, I only see a way to
declare a class as an adapter for a type of object. But in this case,
everything is an Accessible, and so declaring two adapters causes an
"ambiguous adapter choice" error:

class AccessibleListViewAsList(Adapter):
    advise(instancesProvide=[IList], asAdapterForTypes=[Accessible])

    def getLength(self):
        return self.subject.Value

class AccessibleComboboxAsList(Adapter):
    advise(instancesProvide=[IList], asAdapterForTypes=[Accessible])

    def getLength(self):
        return self.subject.child['list'].Value

Notice the implementations need to be different, so a single adapter
will not suffice. I'm guessing what I really need is an adapter factory
that does something like the following pseudocode:

def accessibleAdapterFactory(obj):
    # get object role
    role = obj.Role
    # look for adapters for given role and type
    return protocol.registry.findAdapterFor(role=role, type=type(obj))

Is such a thing possible with PyProtocols? Is there another solution?

Thanks,
Pete

From pje at telecommunity.com  Wed Jun 29 09:16:14 2005
From: pje at telecommunity.com (Phillip J. Eby)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Adapting based on property
In-Reply-To: 
Message-ID: <5.1.1.6.0.20050629090725.02818b50@mail.telecommunity.com>

At 07:12 AM 6/29/2005 -0400, Peter Parente wrote:
>I'm guessing what I really need is an adapter factory that does something
>like the following pseudocode:
>
>def accessibleAdapterFactory(obj):
>    # get object role
>    role = obj.Role
>    # look for adapters for given role and type
>    return protocol.registry.findAdapterFor(role=role, type=type(obj))
>
>Is such a thing possible with pyprotocols?

Yes.
One way is to define the adapter function as a generic function (using
the current CVS HEAD of PyProtocols):

import dispatch

@dispatch.generic()
def createIList(obj):
    """Generic function for adapting Accessible"""

@createIList.when("obj.Role==ComboBox")
def createComboBoxAsList(obj):
    return AccessibleComboboxAsList(obj)

>Is there another solution?

Yes; instead of adapting, you can use generic functions to perform
operations like getLength() directly, e.g.:

@dispatch.generic()
def getLength(obj):
    pass

@getLength.when("obj.Role==ListView")
def listViewLength(obj):
    return obj.Value

@getLength.when("obj.Role==ComboBox")
def comboBoxLength(obj):
    return obj.child['list'].Value

Of course, generic functions will do the check on every call (but using
a hash table lookup if your obj.Role values are hashable), while
adapters need only be created once. There are other tradeoffs regarding
code clarity as well; sometimes interfaces are clearer for a given use
case, sometimes generic functions.

From psucorp at grinchcentral.com  Wed Jun 29 09:12:11 2005
From: psucorp at grinchcentral.com (Erik Rose)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Moving to Subversion
In-Reply-To: <5.1.1.6.0.20050628211324.0253c428@mail.telecommunity.com>
References: <5.1.1.6.0.20050628211324.0253c428@mail.telecommunity.com>
Message-ID: <8CE8766D-398C-4FD2-AA96-5801B694180B@grinchcentral.com>

On Jun 28, 2005, at 11:50 PM, Phillip J. Eby wrote:
> except maybe to check whether Trac makes any assumptions about your
> repository layout (e.g. that branches/trunk/tags crap)

It doesn't. It doesn't have any concept of branches or tags at all.

Cheers,
Erik

From parente at cs.unc.edu  Wed Jun 29 09:21:53 2005
From: parente at cs.unc.edu (Peter Parente)
Date: Fri Jan  2 21:59:49 2009
Subject: [PEAK] Adapting based on property
In-Reply-To: <5.1.1.6.0.20050629090725.02818b50@mail.telecommunity.com>
References: <5.1.1.6.0.20050629090725.02818b50@mail.telecommunity.com>
Message-ID: 

Neat.
Is there any doc about the new features in CVS HEAD?

Thanks,
Pete

On Wed, 29 Jun 2005 09:16:14 -0400, Phillip J. Eby wrote:
> At 07:12 AM 6/29/2005 -0400, Peter Parente wrote:
>> I'm guessing what I really need is an adapter factory that does
>> something like the following pseudocode:
>>
>> def accessibleAdapterFactory(obj):
>>     # get object role
>>     role = obj.Role
>>     # look for adapters for given role and type
>>     return protocol.registry.findAdapterFor(role=role, type=type(obj))
>>
>> Is such a thing possible with pyprotocols?
>
> Yes. One way is to define the adapter function as a generic function
> (using the current CVS HEAD of PyProtocols):
>
> import dispatch
>
> @dispatch.generic()
> def createIList(obj):
>     """Generic function for adapting Accessible"""
>
> @createIList.when("obj.Role==ComboBox")
> def createComboBoxAsList(obj):
>     return AccessibleComboboxAsList(obj)
>
>> Is there another solution?
>
> Yes; instead of adapting, you can use generic functions to perform
> operations like getLength() directly, e.g.:
>
> @dispatch.generic()
> def getLength(obj):
>     pass
>
> @getLength.when("obj.Role==ListView")
> def listViewLength(obj):
>     return obj.Value
>
> @getLength.when("obj.Role==ComboBox")
> def comboBoxLength(obj):
>     return obj.child['list'].Value
>
> Of course, generic functions will do the check on every call (but using
> a hash table lookup if your obj.Role values are hashable), and adapters
> need only be created once. There are other tradeoffs regarding code
> clarity as well; sometimes interfaces are clearer for a given use case,
> sometimes generic functions.
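[Editor's note: the generic-function examples quoted above require the CVS
HEAD of PyProtocols/RuleDispatch. For readers following along without it,
the same role-keyed dispatch idea can be sketched in plain Python. The
`Accessible` stand-in class, the lowercase role names, and the registry
helpers below are all hypothetical illustrations, not the real library's
API.]

```python
# Minimal stand-in for the third-party Accessible proxy (hypothetical).
class Accessible:
    def __init__(self, role, value=None, child=None):
        self.Role = role
        self.Value = value
        self.child = child or {}

# Role-keyed registry: a poor man's version of @getLength.when("obj.Role==...").
_length_impls = {}

def length_for(role):
    """Register an implementation of get_length for a given Role value."""
    def register(fn):
        _length_impls[role] = fn
        return fn
    return register

@length_for("listview")
def list_view_length(obj):
    # For a list view, Value already holds the length.
    return obj.Value

@length_for("combobox")
def combo_box_length(obj):
    # A combo box delegates to its embedded list child.
    return obj.child["list"].Value

def get_length(obj):
    # Dispatch on obj.Role: one dict lookup per call, like the hash-table
    # case described above for hashable Role values.
    try:
        impl = _length_impls[obj.Role]
    except KeyError:
        raise TypeError("no get_length implementation for role %r" % obj.Role)
    return impl(obj)
```

As noted in the thread, this re-dispatches on every call; adapting once and
caching the adapter object pays the lookup cost only once, which is the
tradeoff between the two approaches.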