The test ensures that there's no error when HASH-TABLE-TEST is called on a hash
table with a custom equality function. The tests pass, with some caveats:
- I'm only about 70% sure that FINISHES is the right test-predicate to use for
something like this
- The test suite would consistently fail with non-deterministic segfaults while
  testing the MULTIPROCESSING subtest. This could easily be because I'm using a
  FreeBSD machine and don't have access to a Linux machine at the moment --
  though I'd be happy to re-run the tests when I do. The test suite completed
  when I commented out the MULTIPROCESSING subtest in the ASD file. I don't
  believe this has any bearing on whether the hash-table tests should pass.
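For reference, a sketch of such a test, assuming a FiveAM-style FINISHES macro
and ECL's extension for custom hash-table tests (the :HASH-FUNCTION keyword
and the test name here are assumptions, not the suite's actual code):

```lisp
(test hash-table-test.custom-equality
  ;; HASH-TABLE-TEST should return without signaling an error even
  ;; when the table's test is a custom function.
  (let ((table (make-hash-table :test #'string= :hash-function #'sxhash)))
    (finishes (hash-table-test table))))
```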
Previously, we used the "tombstones" approach for removing entries
from a hash-table, that is buckets were marked as previously occupied
but currently empty. If the hash-table did not grow in size these
tombstones were never cleaned up, only reused for new entries.
For a general purpose hash-table this is a problematic strategy
because there are workloads where this significantly slows down any
attempted lookup or deletion for elements that are not contained in
the hash-table. This happens when new elements are added and deleted
regularly for a long time without the size of the hash-table changing.
Even if the hash-table is never filled above a small fraction of its
size, tombstones will eventually accumulate, and an unsuccessful
search for an element will have to skip over all of them.
In the new implementation, we don't use any tombstones at all but
instead shift elements into more optimal buckets.
In the case where the hash-table in the old approach was free of
tombstones, deletion from highly occupied hash-tables is a little
slower with the new approach, while insertion is a little faster.
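The shifting step can be sketched like so for a simple linear-probing layout
(a sketch under assumed names and representation -- NIL marks an empty bucket
and PROBE-HOME is the bucket a key hashes to; the actual ECL implementation
differs):

```lisp
(defun probe-home (key size)
  ;; Hypothetical hash: the bucket KEY would ideally occupy.
  (mod (sxhash key) size))

(defun backward-shift-delete (keys vals hole)
  "Empty the bucket HOLE, then shift later entries of the probe chain
into more optimal buckets instead of leaving a tombstone behind."
  (let ((size (length keys)))
    (setf (aref keys hole) nil
          (aref vals hole) nil)
    (loop for j = (mod (1+ hole) size) then (mod (1+ j) size)
          for key = (aref keys j)
          until (null key)                 ; the chain ends at an empty bucket
          do (let ((home (probe-home key size)))
               ;; KEY may move back into HOLE unless its home bucket
               ;; lies in the circular interval (HOLE, J].
               (unless (if (< hole j)
                           (and (< hole home) (<= home j))
                           (or (< hole home) (<= home j)))
                 (setf (aref keys hole) key
                       (aref vals hole) (aref vals j)
                       (aref keys j) nil
                       (aref vals j) nil
                       hole j))))))
```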
The function SORT-APPLICABLE-METHODS accepts a third argument called
ARGS-SPECIALIZERS; however, this function assumed that the argument was a list
of the arguments' classes (i.e., not EQL specializers) - see
COMPARE-SPECIALIZERS. This commit doesn't change the function signature, but
conses a new list that is ensured to be a list of classes and passes it to
COMPARE-METHODS.
The (local) functions COMPARE-METHODS, COMPARE-SPECIALIZERS-LISTS and
COMPARE-SPECIALIZERS have their argument names changed to reflect their true
expectations.
The function COMPARE-SPECIALIZERS takes the CLASS-PRECEDENCE-LIST of the class
of the argument to break ties when there is no direct relationship between
method specializers.
Previously a local function APPLICABLE-METHOD-P returned (VALUES NIL NIL) when
it found an EQL specializer whose object was of a matching class. This was
premature, because some later specializer may make the method not applicable
based on one of the argument classes, so there is no need to resort to
COMPUTE-APPLICABLE-METHODS in such a case.
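A hypothetical generic function illustrating why this was premature:

```lisp
;; The object of the EQL specializer, 1, is of a matching class for
;; the call (g 1 42), yet the method is not applicable because 42 is
;; not a STRING - which is decidable from the argument classes alone,
;; without resorting to COMPUTE-APPLICABLE-METHODS.
(defgeneric g (x y))
(defmethod g ((x (eql 1)) (y string))
  (list x y))
```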
Previously this function stored a list of elements:
(cons list-or-random-atom argument-position)
ARGUMENT-POSITION was presumably stored because the authors anticipated
denoting unspecialized arguments; however, all positions were always filled,
because unspecialized arguments had the non-NIL specializer
#<BUILTIN-CLASS T>.
LIST-OR-RANDOM-ATOM contained either a list of EQL specializers or a random
specializer from all method specializers. The purpose of the second value was
unclear.
This change simplifies the code and the interface, and we additionally
maintain more information. From now on the list stores elements:
(cons class-specializer-p eql-specializers)
CLASS-SPECIALIZER-P is either NIL or T, denoting whether the generic function
has a method specialized on this argument on a class other than T.
EQL-SPECIALIZERS is, as before, a list of all EQL specializers.
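For illustration (hypothetical methods; the specializer object is shown
schematically):

```lisp
(defgeneric h (x))
(defmethod h ((x integer)) x)  ; class specializer other than T
(defmethod h ((x (eql 0))) x)  ; EQL specializer
;; The element recorded for the first argument then has the shape
;; (T . (#<eql-specializer 0>)): CLASS-SPECIALIZER-P is T because of
;; the INTEGER method, and the single EQL specializer is collected.
```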
Previously we didn't call it due to bootstrapping issues, but now we convert
functions to methods after early methods are fixed up and their classes are
also updated, so we can. This fix improves conformance.
The most notable change applies to the file fixup.lsp. Functions destined to
be generic are converted to their final version.
Previously this conversion was done in a few steps in order to avoid issues
with infinite recursion in dispatch. We achieve this by assigning to these new
generic functions a simplified discriminating function:
(lambda (&rest args)
  (unless (or (null *clos-booted*)
              (specializers-match-p args specializers))
    (apply #'no-applicable-method generic-function args))
  (apply old-function args))
The old function is also seeded as a primary method of the generic
function. This works correctly because such functions have only one method, so
we may call it directly (this is also a fine optimization strategy that we do
not yet apply generally), and because the discriminating function will be
recomputed when other methods are added, etc.
This way we may use these generic functions directly and without issues
alongside their newly redefined versions, and the file is now ordered as
follows:
- fixup early methods
- redefine functions to their final version
- convert functions to generics
- define missing methods
- implement the dependent maintenance protocol
After this file is loaded it is possible to use generic functions as usual.
Previously we've checked whether the new defstruct is compatible with the old
one like this:
(let ((old-desc (old-descriptions struct)))
  (when (and old-desc (null (compat old-desc new-desc)))
    (error "incompatible")))
This was meant to allow new definitions, but it is incorrect because it allows
first defining a structure without slots and then adding some, like:
(defstruct foo)
(defstruct foo xxx)
The new check verifies that the old definition is a structure and then
compares slots, so the verification is not inhibited when the first definition
doesn't have slots.
Moreover, we now test for slot names being STRING= because:
a) initargs and functions ignore the package (so functions will be redefined)
b) we want to match gensymed slot names
This is compatible with what SBCL does.
On top of that, we check for duplicated slot names and signal an error if
there are any.
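An illustrative sketch of the new checks (the error wording is hypothetical):

```lisp
(defstruct bar x y)     ; accepted
(defstruct bar x y)     ; compatible: slot names match under STRING=
;; (defstruct bar x x)  ; rejected: duplicated slot name X signals an error
```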
- we did not distinguish between classes that had no slots and classes that
  had not been initialized - that led to incorrect class stamps
- structures had no initial class stamp matching their structure
- when slot names changed, structures had their stamp increased despite not
  really changing
cmp.0090.funcall/apply-inline-and-number-of-arguments called COMPILE on a
lambda that caused a call to CMPERR - that cluttered the console log when
testing.
Let the sign of zero determine from which side branch cuts are
approached, no matter whether we use C99 complex numbers or not.
Disable the (acosh -∞) test. This test fails with the new code, but
was supposed to be commented out anyway. In general, we don't
guarantee anything about infinity if complex numbers are involved.
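For example, with IEEE signed zeros the sign of the imaginary part of the
argument selects the side of the SQRT branch cut along the negative real axis
(the printed representation may vary by platform):

```lisp
(sqrt #c(-1.0 0.0))  ; => #C(0.0 1.0)
(sqrt #c(-1.0 -0.0)) ; => #C(0.0 -1.0)
```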
Closes #661.
- don't assume that any keyword is an option
- don't process the same keyword twice
The new behavior can be summarized by these two cases:
(restart-case t
  (retry ()
   :retired ; <- form
   ))
(restart-case t
  (retry ()
   :report report ; <- expression
   :report "foo"  ; <- form
   :test test     ; <- form
   ))
Fixes #666.
Previously C1MAKE-VAR checked whether the symbol NAME is CONSTANTP, but
ECL expands symbol macros in CONSTANTP, so this returned false positives.
A similar concern applied to CMP-ENV-REGISTER-SYMBOL-MACRO-FUNCTION.
When C1EXPR-INNER encountered a symbol, it tried to yield C1CONSTANT-VALUE
if the symbol was CONSTANTP - this was correct, except that we didn't
pass the environment to the predicate, so shadowed symbols weren't handled.
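A short illustration of the false positive described above:

```lisp
;; ECL expands symbol macros in CONSTANTP, so the predicate answers
;; true for a symbol macro expanding to a constant, even though FIVE
;; itself is not a constant variable.
(define-symbol-macro five 5)
(constantp 'five) ; => T in ECL
```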
In this commit one function is added to the core, si:constp (with a
similar purpose to si:specialp), and one function to the compiler,
constant-variable-p (similar to special-variable-p); they are used
where appropriate. A regression test is added.
Fixes #662.