PQR - A Simple Design Pattern for Multicore Enterprise Applications


There's an interesting article over on the Dr. Dobb's Code Talk blog: PQR - A Simple Design Pattern for Multicore Enterprise Applications. It documents a design that I'm pretty familiar with and one which has worked well for me in the past (this project was built this way, for example).

My variation on this idea is that it all tends to live in one process. Work items are passed from one 'processor' to another via queues, and each processor can run multiple threads to process multiple work items in parallel. In simple systems you end up with a "pipeline" and work items flow from one end to the other; more complex systems may be modelled as networks of processors. You can tune the system by adjusting the number of threads in each processor's thread pool, and can also do things like running different processors at different thread priorities (if you really want to). Since a work item is only ever being accessed by a single processor at a time, the data in the work item doesn't need any locking. If a processor needs to access data which can be shared (either by instances of a processor or by different processors) then normal locking is required, but the situations where locking IS needed are greatly reduced.
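The design described above can be sketched quite compactly. This is a minimal illustration, not code from the article or from my servers: each "processor" is a pool of threads pulling work items from an input queue, transforming them, and pushing the results to the next queue, so each item is only ever touched by one thread at a time. The names (`Processor`, `SENTINEL`) are mine, chosen for the example.

```python
import queue
import threading

SENTINEL = object()  # marks end-of-stream for a stage


class Processor:
    """A pool of threads that pull from in_q, apply fn, and push to out_q."""

    def __init__(self, in_q, out_q, fn, thread_count):
        self.in_q, self.out_q, self.fn = in_q, out_q, fn
        self.threads = [threading.Thread(target=self._run)
                        for _ in range(thread_count)]
        for t in self.threads:
            t.start()

    def _run(self):
        while True:
            item = self.in_q.get()
            if item is SENTINEL:
                self.in_q.put(SENTINEL)  # let sibling threads see it too
                return
            # The work item is owned by exactly one thread at this point,
            # so the item's own data needs no locking.
            self.out_q.put(self.fn(item))

    def join(self):
        for t in self.threads:
            t.join()


# Two-stage pipeline: double each item, then add one.
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
stage1 = Processor(q1, q2, lambda n: n * 2, thread_count=2)
stage2 = Processor(q2, q3, lambda n: n + 1, thread_count=3)

for i in range(1, 6):
    q1.put(i)
q1.put(SENTINEL)
stage1.join()        # stage 1 drained; now signal stage 2
q2.put(SENTINEL)
stage2.join()

results = []
while not q3.empty():
    results.append(q3.get())
print(sorted(results))  # [3, 5, 7, 9, 11]
```

Tuning, in this model, is just a matter of changing `thread_count` per stage; a real server would use bounded queues (or an IOCP per stage) rather than unbounded `queue.Queue` instances.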

I find it interesting that the Dr. Dobb's article points out that 'careful measurement is required'. I agree; this is one of those situations where it's vitally important to include performance monitoring (via perfmon counters, perhaps?) from the outset. Unless you can see how many threads are active at each stage in the pipeline, and how many work items are in each of the queues, you simply cannot tune the system in a meaningful manner.
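To make the monitoring point concrete, here's a hypothetical sketch of the kind of instrumentation I mean: per-processor gauges for busy threads and a processed-item count, plus the queue depth, which together tell you where the pipeline is starved or backed up. On Windows these gauges would typically back perfmon counters; here they are plain attributes that a monitor thread could sample. All names are illustrative.

```python
import queue
import threading
import time


class Counters:
    """Per-processor gauges a monitoring thread (or perfmon) could sample."""

    def __init__(self):
        self._lock = threading.Lock()
        self.active_threads = 0     # threads currently working on an item
        self.items_processed = 0    # total items completed

    def enter(self):
        with self._lock:
            self.active_threads += 1

    def leave(self):
        with self._lock:
            self.active_threads -= 1
            self.items_processed += 1


counters = Counters()
work_q = queue.Queue()


def worker():
    while True:
        item = work_q.get()
        if item is None:            # one None per thread shuts the pool down
            return
        counters.enter()
        time.sleep(0.001)           # simulate doing some work
        counters.leave()


threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for i in range(20):
    work_q.put(i)
for _ in threads:
    work_q.put(None)
for t in threads:
    t.join()

print(counters.items_processed)  # 20
print(work_q.qsize())            # 0 - all work and sentinels consumed
```

A monitor thread sampling `counters.active_threads` and `work_q.qsize()` every second or so gives you exactly the per-stage view needed to decide where to add or remove threads.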

6 Comments

Testing . . . posts keep failing with a particularly useless "Your comment could not be submitted due to questionable content" error message.

Screw it. I was going to post a response, including a link to my server-design article which discusses exactly this issue, but this comment system is too broken. I don't have another twenty minutes to spend fighting through that.

Jeff,

Sorry about wasting your time like that; the blacklist error message should include the word that prevented the comment from being posted. I guess it isn't doing that... I'll investigate.

The pattern that was matching and causing the problem was \.at

I've read your server design document and it's very useful. I think I've linked to it before somewhere, but it's well worth another link, http://pl.atyp.us/wordpress/?page_id=1277, and, come to mention it, another read.

Once again, sorry for wasting your time. I may try turning off the blacklist and hope that the captcha works well enough on its own...

Yup, definitely worth reading again.

Whilst my servers already deal reasonably well with the goal of reducing memory copies (they tend to use reference-counted buffers that the server reads into until a complete message is present), they fail the reduced-context-switch requirement somewhat (or at least the multiple-stage-queue ones do).

Therefore, the 'multiple stages without re-queuing' idea is something I need to think about, and the cross-thread-pool active-thread limiting would be useful as well. I tend to have a sequence of IOCPs, each with its own thread pool (a small number of queues, a small number of threads per queue), but a limit on the total number of threads running would be useful. I do have several servers which are, effectively, just async state machines whereby each successive protocol message advances the state machine; perhaps I need to think about merging the 'staged processing' design with the state machine design so that I can do staged processing on a single thread (where appropriate).
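The 'staged processing on a single thread' idea mentioned above might look something like the following sketch: instead of pushing the work item onto a new queue after each stage (paying a context switch each time), one thread drives the item through consecutive stages as a small state machine until it is done. This is my own illustrative rendering of the idea, with made-up stage names, not anything from the linked article.

```python
# Each stage transforms the work item and returns the name of the next
# stage; 'done' ends the run. No queues, no context switches between stages.

def stage_parse(item):
    item["fields"] = item["raw"].split(",")
    return "validate"


def stage_validate(item):
    item["ok"] = all(f.strip() for f in item["fields"])
    return "done"


STAGES = {"parse": stage_parse, "validate": stage_validate}


def process(item):
    state = "parse"
    while state != "done":      # run every stage on this one thread
        state = STAGES[state](item)
    return item


result = process({"raw": "a,b,c"})
print(result["ok"])  # True
```

A real server would only fall back to a queue hand-off when a stage has to block (e.g. waiting for more protocol data), which is where merging this with the async state machine design comes in.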

Thank you for the link, Len, and for the kind words. I'm sorry I sounded so angry. It's been a very long few weeks, and today promises to continue the pattern, else I'd comment at greater length.

No problem Jeff.
