Previously elementary types were considered to be (CONS SPECC TAG), but I want to
introduce additional slot information to them, so we define a structure for that
type. The representation is a list because MAYBE-SAVE-TYPES calls COPY-TREE. Also
DEFSTRUCT is not available yet.
Rename PUSH-TYPE to PUSH-NEW-TYPE and move it to the correct section of the file.
Previously CANONICAL-COMPLEX-TYPE accepted the specializer, which was not
consistent with other functions handling canonical types.
Rename REGISTER-INTERVAL-TYPE to CANONICAL-INTERVAL-TYPE because this function
may register numerous elementary types and return their bit-wise composition,
and rename REGISTER-ELEMENTARY-INTERVAL to REGISTER-INTERVAL-TYPE.
This allows us to remove the kludge from FIND-TYPE-BOUNDS - the parameter
MINIMIZE-SUPER was there to allow registering ranges that are in a canonical
form (that is, left-bound).
To avoid fake aliasing we don't register types that may be obtained by a
composition of other registered types.
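A minimal sketch of the bit-wise composition this refers to (the tags and names
below are made up for illustration, not the actual ECL values):

  ;; Each registered elementary interval owns one bit; a wider range is
  ;; just the LOGIOR of the elementary intervals it covers, so it does
  ;; not need a bit of its own.
  (let ((negative-fixnum     #b001)   ; hypothetical elementary tags
        (non-negative-fixnum #b010))
    (logior negative-fixnum non-negative-fixnum))  ; => #b011, i.e. the FIXNUM range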
The function MAKE-REGISTERED-TAG calls FIND-TYPE-BOUNDS to determine supertypes
that need to be updated with the new bit, and subtypes that need to be included
in the new tag.
This procedure was bogus because it did not recognize equivalent types. That
led to a situation where synonymous types could have been added twice with an
incorrect relation. Consider:
type A: 011
type B: 001
We add a type C that is equivalent to A (and B is a subtype of both). With the
old method the result would be:
type A: 111
type B: 001
type C: 101
So if we had later queried whether A is a subtype of C, then the answer would
incorrectly be NIL.
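A minimal sketch of why these tags give the wrong answer, assuming the
bit-composition semantics described above (TAG-SUBTYPEP is a made-up helper,
not the actual ECL code):

  ;; TAG1 is a subtype of TAG2 when every bit of TAG1 is also set in TAG2.
  (defun tag-subtypep (tag1 tag2)
    (zerop (logandc2 tag1 tag2)))

  (tag-subtypep #b111 #b101)  ; A vs C => NIL, although they are equivalent
  (tag-subtypep #b101 #b111)  ; C vs A => T
  (tag-subtypep #b001 #b101)  ; B vs C => T, as expected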
The bug was hidden by the fact that CANONICAL-TYPE expands type aliases when
they are symbols, so we had never encountered a situation where equivalent types
had different names in *ELEMENTARY-TYPES*. This changes when we introduce the
new kingdom for the CONS type, because the key is (CONS X Y), and the symbols in
the type names X and Y are not expanded, so
(CONS (OR FIXNUM BIGNUM)) is not EQUAL to (CONS INTEGER).
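The mismatch is easy to see at the REPL: the two keys denote the same type but
are not EQUAL.

  (equal '(cons (or fixnum bignum)) '(cons integer))     ; => NIL
  (subtypep '(cons (or fixnum bignum)) '(cons integer))  ; => T, T
  (subtypep '(cons integer) '(cons (or fixnum bignum)))  ; => T, T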
This function is used by REGISTER-ELEMENTARY-INTERVAL and REGISTER-TYPE.
Additionally we drop the call to LOGANDC2 in the invocation of UPDATE-TYPE,
because FIND-TYPE-BOUNDS always does that for us (so it was redundant).
Also remove the redundant (and unused) function BOUNDS-<.
It seems that some variables were also rebound in cmptype-arith.lsp -- to avoid
potential inconsistency we abstract the bindings away as WITH-TYPE-DTABASE.
The type STRING was defined as an alias for (ARRAY CHARACTER (*)) and that was
inconsistent with the type definition for unicode builds; it should be:
(OR (ARRAY CHARACTER (*))
    (ARRAY BASE-CHAR (*)))
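In a unicode build base strings are strings too, so the single-branch alias was
too narrow; for instance:

  (subtypep '(array base-char (*)) 'string)  ; => T, T   -- must be covered by the alias
  (subtypep 'string '(array character (*)))  ; => NIL, T when BASE-CHAR /= CHARACTER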
Instead of relying on the default value of GETHASH, we handle NIL separately and
use FOUNDP in the last case. That reduces code nesting and makes it more readable.
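Roughly the shape of that pattern (a sketch only: LOOKUP-TAG and
REGISTER-NEW-TYPE are made-up names, and the actual branches in ECL may differ):

  (defun lookup-tag (type table)
    (if (null type)
        +built-in-type-nil+   ; NIL handled separately (assumed to map to the bottom type)
        (multiple-value-bind (tag foundp) (gethash type table)
          (if foundp
              tag             ; FOUNDP decides the last case, no default value needed
              (register-new-type type table)))))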
+BUILT-IN-TYPE-NIL+ and +BUILT-IN-TYPE-T+ are the bottom and top types of the
Common Lisp type system. They were sometimes referred to in the code as naked
integers - we change that by defining constants to better convey the meaning.
The issue was revealed by registering long (EQL LIST) elements as cons types --
essentially we reached the frame size limit in the middle of the loop: the frame
was resized, but the pointer `first' was still relative to the old frame base.
The solution is to reinitialize the pointer before each iteration.
The function CL:FINISH-OUTPUT accidentally called ECL_FORCE_OUTPUT when used on
ANSI streams. That became an issue when it was called on a two-way stream where
the output stream was a gray stream with STREAM-FINISH-OUTPUT differing from
STREAM-FORCE-OUTPUT.
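A minimal sketch of the kind of stream affected (FLUSHING-STREAM is a made-up
class and the example assumes ECL's GRAY package for the Gray streams protocol):

  (defclass flushing-stream (gray:fundamental-character-output-stream)
    ((sink :initarg :sink :reader sink)))

  (defmethod gray:stream-write-char ((s flushing-stream) c)
    (write-char c (sink s)))

  (defmethod gray:stream-force-output ((s flushing-stream))
    (force-output (sink s)))    ; initiate output, do not wait

  (defmethod gray:stream-finish-output ((s flushing-stream))
    (finish-output (sink s)))   ; wait until all output reaches its destination

  ;; CL:FINISH-OUTPUT on the two-way stream must end up in STREAM-FINISH-OUTPUT,
  ;; not STREAM-FORCE-OUTPUT, of the wrapped gray stream:
  (let ((s (make-two-way-stream *standard-input*
                                (make-instance 'flushing-stream :sink *standard-output*))))
    (write-char #\x s)
    (finish-output s))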
Sometimes a byte may not be within the character code range. In that case, when
we read the char, the system will signal a condition.
Alternatively (and that's the behavior before this commit) we could return the
character #\Nul. That was done by virtue of ECL_CHAR_CODE skipping the tag bits, so
the returned NIL was treated as 0.
Byte streams transcoding to :ucs-2 and :ucs-4 don't call ecl_set_stream_elt_type,
effectively leaving .byte_buffer uninitialized. Moreover the functions
seq_in_read_byte8 and seq_out_write_byte8 assume the vector type to be
octet-based, and they increment the stream position and test for its limit
accordingly.
That means that ecl_binary_read_byte and ecl_binary_write_byte calls would
segfault when seq_in_read_byte8 and seq_out_write_byte8 are called.
Both conditions could be easily mitigated by initializing .byte_buffer manually
and fixing seq_*_*_byte8 functions to account for the byte size, but there is no
need for that, because for these streams we are not using
ecl_binary_*_byte
ecl_eformat_*_byte
so byte8 functions are not called and .byte_buffer is not used.
Previously sequence streams always needed to go through the eformat and binary
encoders and decoders -- if the bytes were too big, then we couldn't create
sequence streams from them.
After this commit it is possible to pass a character stream or a byte stream and
use it as a bivalent stream without an encoding and decoding roundtrip.
This finishes the commit that adds the unread-byte and peek-byte functions to the
mix, in that for bivalent streams UNREAD-BYTE will work for a subsequent
READ-CHAR and vice versa. This also caters to transcoding etc.
The .byte_stack is used only by files to:
a) unread a single octet when we use the fallback LISTEN implementation
b) unread the bytes that make up a character when UNREAD-CHAR is used
The latter is important to transcode characters from one external format to
another (see the test external-format.0003-transcode-read-char).
This commit improves the function unread-byte to do the same, bringing bivalent
streams almost to parity with regard to that implementation (see the next commit).
That makes the implementation of eformat cleaner and .byte_stack more
self-contained, and saves us consing a new byte stack for sequence streams (where
it was simply ignored, not to mention not entirely correct, because we used the
.byte_stack length to decrement the pointer position while the byte could have
more bits than one octet).
Other optimizations that could be done here:
- make the byte stack an adjustable vector to avoid consing on each unread
Previously we stored in this field the last read char, while now we store there
the last unread char. This way we can't tell whether the last read char was the
same as the unread one, but on the other hand it requires less bookkeeping and
the code shape is similar to UNREAD-BYTE.
It was used to store bytes for unread, but we are going to change how unread
works, and for newlines we can still simply test for them and encode the
behavior directly in unread-char.
Instead of remembering the last unread object and its type, it simply jots down
the fact that something has been unread (and clears it on read), and delegates
the question to the input stream.
We drop the varying generic-read/write variants in favor of using the binary
encoders introduced in earlier commits.
This will allow for unified handling of unread bytes and characters, and for
transcoding both in bivalent streams.
The byte buffer is used for encoding and decoding both characters and bytes.
Previously we used a stack-allocated array, but this doesn't cut it when it
comes to binary streams, where the byte may be a "finite recognizable subtype of
integer" (cf. the specification of OPEN), because then the array may need more
elements.
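For instance, with a standard OPEN call (illustration only; data.bin is just an
example file) a single stream byte already spans several octets:

  (with-open-file (s "data.bin" :direction :output
                                :element-type '(unsigned-byte 32)
                                :if-exists :supersede)
    (write-byte #xDEADBEEF s))  ; one "byte" of the stream occupies four octets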
This will allow us to transcode characters to bytes and vice versa. This is
necessary to implement UNREAD-BYTE and UNREAD-CHAR that work with each other,
but it will also allow us to add low-level parsers for binary objects in the
future.
This is to allow working with sequence streams where the vector may change after
the stream has been created.
When the user specifies :END to be some fixed value, then we keep that promise,
but when :END is NIL, then we always consult the vector's fill pointer (fillp).
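A sketch of the intended behavior, assuming ECL's ext:make-sequence-input-stream
constructor (the constructor name and its defaults are assumptions here, not
part of this commit's text):

  (let* ((buf (make-array 8 :element-type 'base-char :fill-pointer 0 :adjustable t))
         (in  (ext:make-sequence-input-stream buf)))  ; no :END given, i.e. :END is NIL
    (vector-push-extend #\a buf)
    ;; with :END NIL each read consults the vector's fill pointer,
    ;; so data pushed after the stream was created is visible:
    (read-char in))  ; => #\a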
Previously, when we couldn't convert the vector element type to a character,
creating sequence streams failed even when we were expecting a binary stream.
From now on it is possible to use vectors whose upgraded element type is any
integer type.
SEQ_{INPUT,OUTPUT}* -> SEQ_STREAM*
Don't use IO_STREAM_ELT_TYPE in sequences and define SEQ_STREAM_ELT_TYPE instead
to avoid ambiguity.
This is a cleanup that emphasizes the similarities between both objects.
1. ecl_peek_char had an outdated comment, presumably from before we introduced
stream dispatch tables - that comment has been removed.
2. fix erroneous specializations
- of STREAM-UNREAD-CHAR
By mistake we had two methods specialized to ANSI-STREAM, while one was clearly
meant to specialize to T (in order to call BUG-OR-ERROR).
- of winsock_stream_output_ops
The peek_char operation was set to ecl_generic_peek_char instead of
ecl_not_input_read_char.
3. change the struct ecl_file_ops definition
a) change the order of some operations in the ecl_file_ops structure to always
feature READ before WRITE (for consistency)
b) be more precise in the dispatch function declarations and specify the return
type to be ecl_character where applicable
The function operates on a base_string, but if it was supplied with an extended
string the ecl_base_char array was in fact an ecl_character array, and that led
to bad copies. To fix it we ensure that the passed string is first coerced to a
cstring.