[Openmcl-devel] thread overview

Gary Byers gb at clozure.com
Sun Aug 22 21:20:12 PDT 2004



On Sun, 22 Aug 2004, Hamilton Link wrote:

> >> Is it the hemlock window thread, or does the main thread receive the
> >> event and pass it to the
> >> appropriate place?
> >>
> >
> > Key-down events that're directed to hemlock windows get passed to
> > that window's dedicated thread (the incoming NSEvent is mapped to
> > a hemlock KEY-EVENT structure.)  Hemlock commands that affect the
> > contents of the buffer and/or the selection currently do so by
> > invoking methods on the main thread; I think that this is probably
> > overkill.
> >
> > Other events (mouse-down/drag, etc.) related to hemlock windows get
> > handled in the Cocoa event thread.
>
> Redundant question... does this indirection create enough latency to
> limit the typing responsiveness and general redrawing speed of Hemlock
> editor windows?  I know the time is going somewhere, and this bounce
> seems like it would be a likely candidate for investigation.  My impression
> is the state currently maintained by each window's native-thread-based
> process could be stored in some other way if someone took the time to
> work out what that data structure needed to be.  50wpm=300cpm=5cps so
> Hemlock should easily be able to keep abreast of a 5Hz process
> (coding).
>
> h

All other things being equal, doing things in a single thread (and not
having to synchronize or switch contexts) would be faster than doing
those same things in two threads.  At this point, I'm more suspicious
of the things that're actually done in those threads than I am of the
synchronization/context-switch overhead.

Cocoa wants to view the buffer as a linear array of characters: it
wants to know how big that buffer is, what the nth character in the
buffer is, and what font/style attributes are in effect at a given
character position, and it wants insertions/deletions/modifications to
the buffer to be expressed in terms of linear buffer positions.
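
Concretely, that interface amounts to something like this (an
illustrative Lisp sketch with invented names, not the actual bridge
code; on the Cocoa side these correspond to NSString's -length and
-characterAtIndex:, NSAttributedString's
-attributesAtIndex:effectiveRange:, and NSMutableAttributedString's
-replaceCharactersInRange:withString:):

  (defgeneric buffer-length (buffer)
    (:documentation "Total character count."))

  (defgeneric buffer-char (buffer index)
    (:documentation "The INDEXth character of the buffer."))

  (defgeneric buffer-attributes-at (buffer index)
    (:documentation "Font/style attributes in effect at INDEX."))

  (defgeneric buffer-replace-range (buffer start end string)
    (:documentation "Replace the characters from START to END with STRING."))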

Hemlock wants to view the buffer as a set of doubly-linked lines;
positions within the buffer are usually expressed via "marks" (which
reference a character position within a line.)
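
In code, Hemlock's representation looks roughly like this (slot names
are illustrative; the real structures carry more information):

  (defstruct line
    (chars "")       ; the line's characters, without the newline
    (previous nil)   ; previous LINE in the buffer, or NIL
    (next nil))      ; next LINE in the buffer, or NIL

  (defstruct mark
    (line nil)       ; the LINE the mark points into
    (charpos 0))     ; character offset within that line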

It's certainly possible to map between marks and absolute positions,
but doing so naively can be expensive.  You once suggested that this
can be made less expensive via a caching mechanism, and the mechanism
in question helps a lot; unfortunately, the cache isn't used as often
as it should be and is sometimes invalidated unnecessarily, and this
adds up.
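
To make that concrete: the naive mapping walks over every preceding
line, and the cache just remembers where one line starts so that
repeated queries near the same place don't repeat the walk.  (This
uses the LINE/MARK sketch above and isn't the actual cache code.)

  (defun mark-absolute-position (mark)
    "Naive mark->linear-position mapping: O(number of preceding lines)."
    (let ((pos (mark-charpos mark)))
      (do ((l (line-previous (mark-line mark)) (line-previous l)))
          ((null l) pos)
        (incf pos (1+ (length (line-chars l)))))))  ; +1 for the newline

  (defvar *line-start-cache* nil)  ; (LINE . absolute-start), or NIL

  (defun cached-mark-absolute-position (mark)
    "Reuse the cached line start when possible.  The real cache also
  has to be invalidated when the buffer changes; the complaint above is
  that it's invalidated more often than it needs to be."
    (let ((line (mark-line mark)))
      (if (and *line-start-cache* (eq line (car *line-start-cache*)))
          (+ (cdr *line-start-cache*) (mark-charpos mark))
          (let ((start (- (mark-absolute-position mark) (mark-charpos mark))))
            (setf *line-start-cache* (cons line start))
            (+ start (mark-charpos mark))))))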

There are other sources of overhead: every change to the buffer gets
processed as an editing transaction (beginEditing/endEditing messages
bracket messages which denote single changes in the buffer's contents
and length.)  Since some commands (e.g., indentation) cause a lot of
changes, it may make more sense to batch more of those changes into a
single beginEditing/endEditing transaction.  The editing change
messages are handled
in the main thread; that's probably unnecessarily paranoid.  There's
some overhead in the context switch and synchronization; there's
also some overhead involved in marshaling parameters so that the
right kind of message can be posted to the main thread's event queue.
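
By way of illustration, the batching idea is just the following
(NOTIFY-MAIN-THREAD is a made-up stand-in for marshaling the arguments
and posting a message to the main thread's event queue; it isn't the
actual bridge call):

  (defun notify-main-thread (message &rest args)
    ;; Stand-in: the real code marshals ARGS and posts an event to the
    ;; Cocoa main thread.
    (format t "~&; main thread <- ~S ~S~%" message args))

  (defmacro with-editing-transaction (&body body)
    "Report all of the changes BODY makes inside a single
  beginEditing/endEditing pair, rather than one pair per change."
    `(progn
       (notify-main-thread :begin-editing)
       (unwind-protect (progn ,@body)
         (notify-main-thread :end-editing))))

  ;; e.g. an indentation command that changes many lines:
  ;;   (with-editing-transaction
  ;;     (dolist (change pending-changes)
  ;;       (notify-main-thread :edited-range change)))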

Just about every time the insertion point moves, code has to decide
whether or not it's necessary to blink matching parens.  Deciding
whether or not we're next to a paren isn't hard; but deciding whether
we're in the middle of a string or comment and, if not, finding the
matching paren both involve parsing the buffer.  The same parsing work
is done every time the insertion point moves; caching the parse
results and handling small, incremental (and common) movements without
a full reparse would probably speed things up a bit.
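
Roughly, the caching idea is to key the parse on a buffer modification
tick, so that merely moving the insertion point doesn't cost a reparse
(again, invented names rather than the real Hemlock code):

  (defstruct parse-cache
    (tick -1)       ; buffer modification count the cached parse matches
    (regions '()))  ; string/comment regions, as a list of (START . END)

  (defun parse-buffer-regions (buffer)
    ;; Stand-in for the real (expensive) scan of BUFFER for strings
    ;; and comments.
    (declare (ignore buffer))
    '())

  (defun string-or-comment-regions (buffer tick cache)
    "Reparse only if the buffer has changed since the cached parse."
    (unless (= tick (parse-cache-tick cache))
      (setf (parse-cache-regions cache) (parse-buffer-regions buffer)
            (parse-cache-tick cache) tick))
    (parse-cache-regions cache))

  (defun in-string-or-comment-p (pos regions)
    (some (lambda (r) (and (<= (car r) pos) (< pos (cdr r)))) regions))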

Etc., etc. ... Context-switching isn't free, but I think that a modern
CPU can switch contexts faster than you can type.  I don't think that
the fact that two threads are involved is as likely a culprit as the
fact that some of the things those threads are doing are grossly
inefficient.




