[PEAK] The path of model and storage
Phillip J. Eby
pje at telecommunity.com
Wed Jul 28 14:17:54 EDT 2004
At 11:00 AM 7/28/04 -0700, Robert Brewer wrote:
>And if, in the process, you decided to make a cache container which:
>
>1. Accepts any Python object (i.e. - don't have to subclass from
>peak.something),
>2. Is thread-safe: avoiding "dict mutated while iterating" (probably
>with a page-locking btree),
>3. Indexes on arbitrary keys, which are simple attributes of the cached
>objects (both unique and non-), and
>4. Is no more than 4 times as slow as a native dict,
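[For reference, requirements 1 and 3 in plain Python look roughly like the
sketch below. `IndexedCache` and its method names are invented here for
illustration -- they are not anything in peak.* -- and the sketch deliberately
ignores requirements 2 and 4:]

```python
from collections import defaultdict

class IndexedCache:
    """Accepts any Python object (no required base class) and indexes it
    on arbitrary attribute names. Not thread-safe, and not fast."""

    def __init__(self, *index_attrs):
        self._objects = set()
        # One index per attribute: attr name -> attr value -> set of objects.
        self._indexes = {attr: defaultdict(set) for attr in index_attrs}

    def add(self, obj):
        self._objects.add(obj)
        for attr, index in self._indexes.items():
            index[getattr(obj, attr)].add(obj)

    def lookup(self, attr, value):
        # Two dict probes before we even touch a cached object.
        return set(self._indexes[attr].get(value, ()))
```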
#2 just ain't gonna happen. Workspaces will not be shareable across
threads. (Or more precisely, workspaces and the objects provided by them
will not include any protection against simultaneous access by multiple
threads.) #4 is also impossible, if I understand it correctly, since
accessing just *one* attribute will in the typical case require *two*
dictionary lookups. So, unless you're not counting the time needed to
*generate* the keys, that's clearly a non-starter.
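To make the two-lookup point concrete (the dict layouts below are
hypothetical, chosen only to count probes):

```python
# An attribute index maps attribute name -> value -> matching objects.
indexes = {"x": {1: ["obj_a"], 2: ["obj_b"]}}

# A native dict resolves a fully-formed key in a single probe:
native = {("x", 1): ["obj_a"], ("x", 2): ["obj_b"]}
hit = native[("x", 1)]   # one dict lookup

# The attribute index needs two probes for the same answer:
by_value = indexes["x"]  # first lookup: the index for this attribute
hit2 = by_value[1]       # second lookup: the objects for this value
assert hit == hit2
```

And the "native" version above already hides the cost of building the
`("x", 1)` tuple -- the key-generation time mentioned above.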
>...I'd (wash your car | walk your dog | do your taxes) for a year. :)
And based on those requirements, you'd be getting the better half of the
deal by far!