<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><br><div><div>On 09.12.2013 at 21:23, Gary Byers <<a href="mailto:gb@clozure.com">gb@clozure.com</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite">Rumors that I've become distracted and forgotten about this are at least<br>slightly exaggerated.<br><br>Could you (Paul) please try to reproduce the problem in the current trunk (r15975 or later)? I haven't been able to do so, but I often found<br>the problem harder to reproduce than you did.<br></blockquote><div><br></div><div>I did test the newest trunk (after running (rebuild-ccl :full t), giving me "Welcome to Clozure Common Lisp Version 1.10-dev-r15975M-trunk (LinuxX8664)!").</div><div><br></div><div>Unfortunately, the problem remains the same; I am getting the same two types of symptoms you describe.</div><br><blockquote type="cite"><br>On Thu, 7 Nov 2013, Gary Byers wrote:<br><br><blockquote type="cite">[Sorry if I'm bombarding this list with a discussion that may not be of<br>wide interest. If you aren't interested in this and don't read this message,<br>rest assured that you aren't likely to be missing anything ...]<br><br>The bug that Paul Meurer found seems to have two classes of symptoms:<br><br>1) a hard crash into the kernel debugger, usually due to a GC function<br> (set_n_bits()) being confused about the number of elements in a vector-like<br> object. That's typically a sign of memory corruption (unsafe lisp code<br> storing outside the bounds of an object.)<br><br>2) a (small) variety of memory faults and type errors signaled as lisp conditions.<br> (Sometimes, these are followed by case (1) soon after.) These kinds of errors<br> can also be symptoms of the same kind of "memory corruption".<br><br>When it was first reported, I suggested enabling some consistency checks in<br>the GC that often find problems like this near the time they occur. 
Paul<br>said that he tried enabling those checks, but they didn't seem to detect<br>any problems, and the crashes and faults continued to occur.<br><br>I can reproduce the problems more reliably now than I'd been able to and can<br>look at things in GDB when they occur. One of my favorite class 2 symptoms<br>so far is:<br><br><blockquote type="cite">Error: The value #<A Foreign Pointer #x7FFFD6C15690> is not of the expected type MACPTR.<br>While executing: %GET-CSTRING, in process getstring1(71).<br></blockquote><br>which is sort of saying "that thing that should have been a foreign pointer<br>(aka MACPTR) wasn't of that type when I looked at it a few cycles ago. Never<br>mind the fact that it is of that type now." Of course that makes no sense,<br>and of course it "can't happen."<br><br>This is vaguely reminiscent of a situation that arises on machines that have<br>separate I and D caches: we write some code to memory and execute it (without<br>properly synchronizing the caches first), and what actually gets executed<br>doesn't have anything to do with the code in memory. If that happens often<br>enough or has happened recently, it's possible to be suspicious of that issue;<br>otherwise, it's a very confusing situation.<br><br>So: some number of cores are running Paul's test case; they're doing speculative<br>memory accesses and doing things out-of-order and doing all of the things that<br>a modern CPU does (all the while maintaining the illusion - mostly - that things<br>are happening in "program" or "architectural" order). Some thread needs to<br>run the GC, so that thread suspends all other threads. When a thread is suspended,<br>its "context" (architecturally visible register values) is stored someplace where<br>the GC can find it; the GC will use those register values to identify reachable<br>objects, and if it moves such objects it updates the register values in the saved<br>context info. 
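The context-scanning step just described can be sketched in C. To be clear, this is an illustrative model only: the structure names, the register count, and the single-relocated-range forwarding function are my assumptions, not CCL's actual GC code.

```c
#include <stdint.h>

#define NREGS 16

/* Hypothetical model of a suspended thread's saved context: the
   architecturally visible register values at suspension time. */
typedef struct saved_context {
    uintptr_t regs[NREGS];
} saved_context;

/* Stand-in for the GC's "where did this object move to?" lookup;
   modeled here as a single relocated address range. */
static uintptr_t old_base, new_base, range_len;

static uintptr_t forward(uintptr_t p)
{
    if (p - old_base < range_len)        /* p is inside the moved range */
        return new_base + (p - old_base);
    return p;                            /* otherwise: leave it alone */
}

/* Treat each saved register as a root, rewriting any that point at
   objects the GC moved; the thread later resumes with these values. */
static void update_saved_context(saved_context *ctx)
{
    for (int i = 0; i < NREGS; i++)
        ctx->regs[i] = forward(ctx->regs[i]);
}
```

The point of the sketch is only that the GC rewrites the saved register file in place; the resumed thread is supposed to see nothing but the updated values.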
When the GC finishes, it causes the suspended thread to resume<br>execution with the updated (architecturally visible) register values; the behavior<br>that we're seeing is as if the core that the thread was running on resumes the<br>speculative, out-of-order execution that it was doing (which was based on the old<br>register values), and we see things that can't happen happen.<br><br>Some x86 instructions force things to synchronize with the "architectural" state.<br>When the GC suspends a thread, it atomically increments a counter associated with<br>the thread and only actually does the OS-level suspension if that counter changed<br>from 0 to 1. Only one thread at a time can increment that counter (that didn't<br>use to be the case), but it still does an "atomic" incf of the counter because<br>the instructions involved are supposed to have this synchronizing effect. They<br>may do so, but the thread can resume its wacky out-of-order speculative execution<br>as soon as the counter has been incremented, and it won't actually be suspended until<br>some number of cycles later.<br><br>I've tried forcing things to synchronize after all other threads have been suspended<br>and haven't seen the problem(s) occur since that change. That's not conclusive<br>(since none of this is entirely deterministic), but it seems encouraging.<br><br>If this does explain the problems, it's a little odd that we haven't seen the<br>problem occur more often (assuming that we haven't.) 
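The suspend-counter scheme and the extra synchronization point described above might look schematically like this in C. The names, the structure, and the use of GCC's __sync builtins are illustrative assumptions on my part, not the actual CCL kernel sources:

```c
#include <stdint.h>

typedef struct thread_ctx {
    int32_t suspend_count;   /* 0 = running, >0 = suspended */
} thread_ctx;

/* Atomically bump the count; only perform the OS-level suspension on
   the 0 -> 1 transition.  The locked increment is a full barrier on
   x86, but the target core may keep executing (speculatively or
   otherwise) until the OS actually stops it, some cycles later. */
static int suspend_thread(thread_ctx *t)
{
    if (__sync_add_and_fetch(&t->suspend_count, 1) == 1) {
        /* the real os_suspend(t) call would go here */
        return 1;            /* we performed the actual suspension */
    }
    return 0;                /* was already suspended */
}

/* The change described above: once *all* threads have been stopped,
   force a full serialization point before the GC reads and rewrites
   their saved contexts. */
static void after_all_threads_suspended(void)
{
    __sync_synchronize();    /* full memory fence (mfence on x86-64) */
}

/* Decrement on resume; the thread really runs again only when the
   count returns to 0. */
static int resume_thread(thread_ctx *t)
{
    return __sync_sub_and_fetch(&t->suspend_count, 1) == 0;
}
```

The sketch's point is the ordering: the fence is issued after every suspension has completed, not merely as a side effect of each increment.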
I assume that a number<br>of things have to happen just right (or "just wrong") in order for the symptoms<br>above to occur.<br><br>I'll keep looking at this as time permits, and if the results hold I'll try to<br>check in the change in the next few days.<br><br>On Thu, 7 Nov 2013, Gary Byers wrote:<br><br><blockquote type="cite">I didn't have any luck reproducing the problem on that Xeon (or on another<br>one that rme pointed out we had), but I think that I found a way to make<br>the problem occur much more reliably than it has been.<br>What I've been doing (based on your code) is something like:<br>(dotimes (i n) ; n is some large number<br> (dotimes (j m) ; m is proportional to the number of cores<br> (process-run-function ...)))<br>where each thread does 100000 iterations of a simple foreign function call.<br>What generally happens here is that we create threads faster than they can<br>run to completion; I think that the number of active threads gets up into<br>the 100s in some cases, and the term "active" is a bit of a misnomer, since<br>most of them sit idle while the few that get scheduled do enough consing to<br>trigger the GC, where we spend most of our time.<br>I changed the code to:<br>(let* ((sem (make-semaphore)))<br> (dotimes (i n)<br> (dotimes (j m)<br> (process-run-function whatever<br> (lambda ()<br> (dotimes (k 100000)<br> ..)<br> (signal-semaphore sem))))<br> (dotimes (j m) (wait-on-semaphore sem))))<br>i.e., to create M threads on each iteration of the loop and wait for them to<br>run to completion before creating more. 
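For comparison, the batch-and-wait pattern above can be sketched with POSIX threads and semaphores. This is a hedged sketch only: the worker payload and all names are placeholders (the real test calls a foreign function 100000 times per thread, using CCL's process-run-function and semaphores as shown above).

```c
#include <pthread.h>
#include <semaphore.h>

#define M 4   /* threads per batch, e.g. proportional to core count */
#define N 3   /* number of batches */

static sem_t batch_done;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static int runs_completed = 0;

static void *worker(void *arg)
{
    (void)arg;
    /* stand-in for the per-thread foreign-function-call loop */
    pthread_mutex_lock(&count_lock);
    runs_completed++;
    pthread_mutex_unlock(&count_lock);
    sem_post(&batch_done);           /* signal-semaphore equivalent */
    return NULL;
}

int run_batches(void)
{
    sem_init(&batch_done, 0, 0);
    for (int i = 0; i < N; i++) {
        pthread_t tids[M];
        for (int j = 0; j < M; j++)
            pthread_create(&tids[j], NULL, worker, NULL);
        /* wait-on-semaphore equivalent: block until all M workers in
           this batch have finished before creating the next batch */
        for (int j = 0; j < M; j++)
            sem_wait(&batch_done);
        for (int j = 0; j < M; j++)
            pthread_join(tids[j], NULL);
    }
    sem_destroy(&batch_done);
    return runs_completed;           /* N * M completed workers */
}
```

The design point is the throttle: at most M workers exist at once, so each batch spends its time running (and crossing the Lisp/foreign boundary) rather than sitting idle behind hundreds of queued threads.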
Those M threads should spend most<br>of their time running (and entering or returning from foreign function calls<br>when the GC runs), and that seems to trigger the problem more reliably.<br>I'll try to look at this as time permits; I don't know how long it'll<br>take me to see the problem when I do, but I think that I can at least reproduce<br>the problem much more reliably than I've been able to.<br>On Mon, 4 Nov 2013, Gary Byers wrote:<br><blockquote type="cite">Thanks.<br>Coincidentally, <a href="http://clozure.com">clozure.com</a> crashed today; the hosting service moved it to<br>a new machine (an 8-core Xeon). Hmmm ...<br>On Mon, 4 Nov 2013, Paul Meurer wrote:<br><blockquote type="cite">I did do the experiment you proposed.<br>On the older Xeon 4-core machine, the crashes become somewhat less frequent, but this might be insignificant<br>because I didn't run enough iterations. A crash occurs no more than every 50th iteration on average. Perhaps not<br>often enough for convenient debugging.<br>Here are the specs:<br>processor : 0<br>vendor_id : GenuineIntel<br>cpu family : 6<br>model : 15<br>model name : Intel(R) Xeon(R) CPU 5140 @ 2.33GHz<br>stepping : 6<br>cpu MHz : 2327.528<br>cache size : 4096 KB<br>physical id : 0<br>siblings : 2<br>core id : 0<br>cpu cores : 2<br>fpu : yes<br>fpu_exception : yes<br>cpuid level : 10<br>wp : yes<br>flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx<br>fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm<br>bogomips : 4659.22<br>clflush size : 64<br>cache_alignment : 64<br>address sizes : 36 bits physical, 48 bits virtual<br>power management:<br>On the 16-core machine (16 incl. hyperthreading), nothing seems to have changed. 
The latter has two CPUs with these<br>specs:<br>Intel Xeon E5 4-Core - E5-2643 3.30GHz 10MB LGA2011 8.0GT/<br>or from /proc/cpuinfo:<br>processor : 0<br>vendor_id : GenuineIntel<br>cpu family : 6<br>model : 45<br>model name : Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz<br>stepping : 7<br>cpu MHz : 3301.000<br>cache size : 10240 KB<br>physical id : 0<br>siblings : 8<br>core id : 0<br>cpu cores : 4<br>apicid : 0<br>initial apicid : 0<br>fpu : yes<br>fpu_exception : yes<br>cpuid level : 13<br>wp : yes<br>flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr<br>sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology<br>nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2<br>x2apic popcnt aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid<br>bogomips : 6583.99<br>clflush size : 64<br>cache_alignment : 64<br>address sizes : 46 bits physical, 48 bits virtual<br>power management:<br>I also ran the code on an AMD Opteron machine, but there no crashes occurred, as far as I can see (after 50<br>iterations).<br><br> I tried running your example in an infinite loop on another Core i7 machine.<br> After about an hour, it crashed in the way that you describe. I poked around<br> a bit in GDB but wasn't sure what I was seeing; the C code in the CCL kernel<br> (including the GC) is usually compiled with the -O2 option, which makes it<br> run faster but makes debugging harder.<br><br> I figured that things would go faster if debugging were easier, so I rebuilt<br> the kernel without -O2 and tried again. 
It's been running for over 24 hours<br> at this point without incident.<br><br> Aside from being yet another example of the famous Heisenbug Uncertainty<br> Principle, this suggests that how the C code is compiled (by what version<br> of what compiler and at what optimization settings) may have something to<br> do with the problem (or at least the frequency at which it occurs.)<br><br> I'm curious as to whether building the kernel without -O2 causes things to<br> behave differently for you. To test this:<br><br> $ cd ccl/lisp-kernel/linuxx8664<br> Edit the Makefile in that directory, changing the line:<br><br> COPT = -O2<br><br> to<br><br> COPT = #-O2<br><br> $ make clean<br> $ make<br><br> If the problem still occurs for you with the same frequency that it's been occurring<br> on your Xeons, that'd tell us something (that the differences between the Xeon and<br> other x8664 machines have more to do with the frequency with which the problem<br> occurs than compiler issues do.) If that change masks or avoids the problem, that'd<br> tell us a bit less. In either case, if you can try this experiment it'd be good to<br> know the results.<br><br> If the processor difference remains a likely candidate, it'd be helpful to know<br> the exact model number of the (smaller, 4-core) Xeon machine where the problem<br> occurs (frequently) for you. Doing<br><br> $ cat /proc/cpuinfo<br><br> may list this info under "model name" for each core.<br><br> I've been able to reproduce the problem only twice on Core i7 machines in a few days<br> of trying, and it'd likely be easiest for me to understand and fix if it were easier<br> for me to reproduce.<br><br> On Thu, 31 Oct 2013, Paul Meurer wrote:<br><br> On 31.10.2013 at 01:15, Gary Byers <<a href="mailto:gb@clozure.com">gb@clozure.com</a>> wrote:<br><br> On Wed, 30 Oct 2013, Paul Meurer wrote:<br> I ran it now with --no-init and in the shell, with no difference. Immediate<br> failure with<br> 
:consing in *features*,<br> bogus objects etc. after several rounds without :consing.<br><br> So, I can't rant and rave about the sorry state of 3rd-party CL libraries, and<br> anyone reading this won't be subjected to me doing so.<br><br> Oh well.<br><br> I was able to reproduce the problem by running your test 100 times,<br> I am not able to provoke it at all on the MacBook, and I tried a lot.<br><br> so apparently<br> I won't be able to blame this on some aspect of your machine. (Also unfortunate,<br> since my ability to diagnose problems that only occur on 16-core machines depends<br> on my ability to borrow such machines for a few months.)<br> I think you can do without a 16-core machine. I am able to reproduce the failure<br> quite reliably on an older 4-core<br> machine with Xeon CPUs and SuSE, with slightly different code (perhaps to get the timing<br> right):<br> (dotimes (j 100)<br>   (print (ccl::all-processes))<br>   (dotimes (i 8)<br>     (process-run-function<br>      (format nil "getstring-~a-~a" j i)<br>      (lambda (i)<br>        (let ((list ()))<br>          (dotimes (i 500000)<br>            (push (getstring) list)))<br>        (print i))<br>      i))<br>   (print (list :done j))<br>   (sleep 1))<br> If you really need a 16-core machine to debug this, I can give you access to mine. :-)<br><br> My machine has 16 true cores and hyperthreading; I am running CentOS 6.0, and a<br> recent CCL<br> 1.9 (I did svn update +<br> rebuild of everything yesterday).<br> I also observed that the problem goes away when I replace the constant string<br> in the<br> library by a freshly<br> allocated string:<br> char *getstring() {<br>   int index;<br>   char *buffer = (char *)calloc(100 + 1, sizeof(char));<br>   for (index = 0; index < 100; index++) {<br>     buffer[index] = 'a';<br> 
  }<br>   buffer[100] = '\0';<br>   return buffer;<br> }<br> One should expect the strings in the Postgres library to be freshly allocated,<br> but<br> nevertheless they behave like<br> the constant string example.<br><br> It's unlikely that this change directly avoids the bug (whatever it is); it's more<br> likely that it affects timing (exactly what happens when.) I don't yet know what<br> the bug is, but I think it's fair to characterize the bug<br> as being "timing-sensitive". (For example: from the GC's point of view, it matters whether<br> a thread is running Lisp or foreign code when that thread is suspended by the GC.<br> The transition between Lisp and foreign code takes a few instructions, and if<br> a thread is suspended in the middle of that instruction sequence and the GC<br> misinterprets its state, very bad things like what you're seeing could occur.<br> That's not supposed to be possible, but something broadly similar seems to be<br> happening.)<br> -- <br> Paul<br></blockquote>_______________________________________________<br>Openmcl-devel mailing list<br><a href="mailto:Openmcl-devel@clozure.com">Openmcl-devel@clozure.com</a><br>http://clozure.com/mailman/listinfo/openmcl-devel<br></blockquote></blockquote></blockquote><br><br></blockquote></blockquote></div><br><div>
<span class="Apple-style-span" style="border-collapse: separate; font-family: 'Lucida Grande'; border-spacing: 0px; ">-- <br>Paul</span>
</div>
<br></body></html>