arenavm.c, trace.c, mpmst.h -- trace->preTraceArenaReserved, to show pre- and peak-vmem during collection
Copied from Perforce
Change: 170098
ServerID: perforce.ravenbrook.com
arenavm.c:
- M_whole, M_frac: print count of bytes as Megabytes
- diag on VMCompact after all client-requested traces, plus any others where we returned a chunk.
- show vmem change, and also trace cond, live / % / stuck(pip), notCond
mpm.c -- new "$3" format for 0-padding a 3-char-wide field, for thousandths of a MB (sketch below)
zcoll.c -- try some new parameters for tests
diag.c -- just VMCompact diag
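A minimal sketch of the megabyte formatting these diags rely on; the M_whole/M_frac names are from this changelist, everything else (including using %03lu in place of the new "$3" field) is illustrative:

    #include <stdio.h>

    /* whole megabytes, and thousandths of a megabyte, of a byte count */
    #define M_whole(bytes) ((unsigned long)((bytes) / 1048576UL))
    #define M_frac(bytes)  ((unsigned long)(((bytes) % 1048576UL) * 1000UL / 1048576UL))

    int main(void)
    {
      unsigned long reserved = 10747904UL;  /* example byte count: 10.25 MB */
      /* %03lu stands in for the new zero-padded 3-char-wide "$3" field */
      printf("reserved %lu.%03lu MB\n", M_whole(reserved), M_frac(reserved));
      return 0;
    }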
Copied from Perforce
Change: 170097
ServerID: perforce.ravenbrook.com
arenavm.c -- move chunk-return into new function "VMCompact".
(also, in VMArenaFinish, null out arena->primary so it is not left dangling).
arena.c, arenavm.c, mpm.h, mpmst.h, mpmtypes.h:
arena->class->compact: ArenaCompact, ArenaTrivCompact, VMCompact
trace.c -- traceReclaim calls ArenaCompact!
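A sketch of the new method dispatch, under assumed declarations (the real ones are spread across mpm.h, mpmst.h and mpmtypes.h; only the three method names are from this changelist):

    typedef struct ArenaStruct *Arena;
    typedef struct TraceStruct *Trace;
    typedef void (*ArenaCompactMethod)(Arena arena, Trace trace);

    typedef struct ArenaClassStruct {
      /* ... other class methods ... */
      ArenaCompactMethod compact;   /* ArenaTrivCompact or VMCompact */
    } *ArenaClass;

    struct ArenaStruct {
      ArenaClass class;
      /* ... */
    };

    /* default for arena classes with no address space to give back */
    void ArenaTrivCompact(Arena arena, Trace trace)
    {
      (void)arena;
      (void)trace;   /* deliberately a no-op */
    }

    /* generic entry point: traceReclaim calls this after reclaiming,
       giving a VM arena the chance to destroy now-empty chunks */
    void ArenaCompact(Arena arena, Trace trace)
    {
      (*arena->class->compact)(arena, trace);
    }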
Copied from Perforce
Change: 170095
ServerID: perforce.ravenbrook.com
zcoll.c:
How to get rid of all the objects, so full collect really collects all automatic objects:
- Rootdrop() helps, but we can still retain a 1.2MB object;
- stackwipe() does not help much -- these unwanted ambig refs are being left on the stack by MPS code that runs between mps_arena_collect and the flip!
- therefore StackScan(0/1) to destroy stack+reg root before full collect: it's the only way to be sure.
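A sketch of the resulting test shape; the mps_* calls are the public MPS API, and arena and stack_root are assumed to come from the test's earlier setup:

    #include "mps.h"

    void full_collect_everything(mps_arena_t arena, mps_root_t stack_root)
    {
      /* destroy the ambiguous stack+register root first, so that MPS
         code running between mps_arena_collect and the flip cannot
         leave retaining ambiguous refs on the stack */
      mps_root_destroy(stack_root);
      (void)mps_arena_collect(arena);   /* now really collects everything */
      mps_arena_release(arena);
    }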
Reproducibility:
- give Make() a random? switch, acted on by df() = diversity function, to allow bypass of rnd();
- ZRndStateSet, to set the seed for rnd()
Output:
- print_M: switchable Mebibytes or Megabytes (more useful, to be honest);
- get(): don't report message times, it messes up diffs.
testlib.c/h:
Reproducibility:
- fix rnd_state so a rnd_state getter is possible;
- testlib.h += rnd_state_t, rnd_state(), rnd_state_set(), rnd_state_set_v2()
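A sketch of the shape of those hooks, assuming the testlib generator keeps a single word of state (the constants below are illustrative, not the ones in testlib.c):

    typedef unsigned long rnd_state_t;

    static rnd_state_t state = 1;

    unsigned long rnd(void)
    {
      state = state * 1103515245UL + 12345UL;   /* any LCG will do for a sketch */
      return state;
    }

    rnd_state_t rnd_state(void)         /* getter: log it when a test starts */
    {
      return state;
    }

    void rnd_state_set(rnd_state_t s)   /* setter: replay a logged failure */
    {
      state = s;
    }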
trace.c: traceFindGrey diag: no newline please
Copied from Perforce
Change: 170093
ServerID: perforce.ravenbrook.com
tract.c -- fix more ChunkCache defects:
- drop never-read chunkCache->pageTableBase and pageTableLimit fields: they were used for ChunkOfSeg(), back when each SegStruct was actually a PageStruct in some chunk's PageStructTable; see VMArenaChunkOfSeg() in //info.ravenbrook.com/project/mps/branch/2001-08-13/trunk/src/arenavm.c (https://github.com/Ravenbrook/mps/issues/46)
- there's no need for the arena to initialise the chunk cache; this allows the en/decache functions to be local to tract.c (i.e. declared static)
Copied from Perforce
Change: 170083
ServerID: perforce.ravenbrook.com
arenavm.c -- on VMFree(), destroy any empty chunks (except the primary). (VMFree is not the ideal place to do it, but works for proof of concept).
tract.c -- fix ChunkCache defects:
- previously, if the cache was empty (chunkCache->chunk == NULL) then the other fields were *undefined*; but code looked at them anyway (!) without first checking chunkCache->chunk;
- change it (.chunk.empty.fields) so that, if the cache is empty, the other fields have defined values: cache-using code may look at them, and they are chosen so that no cache hit can occur (sketch after this list);
--> this fixes the crashing defect shown by changelist 170072
- AVERT(ChunkCache) in the many places it should be checked;
- use AVERT_CRITICAL in ChunkEncache, because it is called by ChunkOfAddr;
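A sketch of the .chunk.empty.fields invariant; the struct layout here is an assumption, the invariant is the point:

    typedef struct ChunkStruct *Chunk;

    typedef struct ChunkCacheStruct {
      Chunk chunk;   /* NULL when the cache is empty */
      char *base;    /* when empty: base == limit == NULL, so the */
      char *limit;   /* half-open test below can never hit */
    } ChunkCacheStruct;

    void ChunkCacheReset(ChunkCacheStruct *cache)
    {
      cache->chunk = NULL;
      cache->base = NULL;
      cache->limit = NULL;   /* empty interval: defined, but unhittable */
    }

    int ChunkCacheHit(ChunkCacheStruct *cache, void *addr)
    {
      /* safe to evaluate even when the cache is empty */
      return cache->base <= (char *)addr && (char *)addr < cache->limit;
    }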
Also:
plan.txt -- notes on use of primary chunk.
config.h -- DIAG also in ci, please [Do not integ to master]
diag.c -- skip ChainCondemnAuto diag thanks
zcoll.c -- move the printf announcing "Destroying arena" etc. to just before we do it, not just after.
Copied from Perforce
Change: 170082
ServerID: perforce.ravenbrook.com
arenavm.c: VMFree is okay for testing chunk-ret; though just sparePagesPurge() for now;
diag.c: show what we want for using zcoll to show chunk-ret:
VM_ix_Create/Destroy
TraceStart, except only briefly for dyn-crit (why=2) and not at all for minor
locus.c: no newline on "condemn gens" diag please
tract.c: ChunkDecache is BROKEN; just add AVER to catch this for now
vmix.c: VM_ix_Create_ok/VM_ix_Destroy (vmw3.c needs similar)
zcoll.c:
call mps_arena_release after mps_arena_collect!!!
make, collect, make, collect, to show chunk-ret
10MB arena means many chunks
None of this is releaseable quality of course.
Copied from Perforce
Change: 170071
ServerID: perforce.ravenbrook.com
mps.h, mpsicv.c: implement mps_addr_pool and mps_addr_fmt
mpsicv.c: new addr_pool_test(), to test them
w3gen.def: export them
walkt0.c: test them within mps_arena_formatted_objects_walk(). (Also checks against values passed to stepper function, and against what we expect).
tool/test-runner.py: add walkt0
Reference Manual: mps_addr_pool, mps_addr_fmt: as yet undocumented (sketch below).
Also: design/poolamc: clarifications; add dated attribution for statements by RHSK.
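A sketch of how the two new calls can be checked from a walk stepper, in the spirit of walkt0.c; mps_addr_pool and mps_addr_fmt are the new public calls, the rest is illustrative:

    #include <assert.h>
    #include "mps.h"

    /* stepper passed to mps_arena_formatted_objects_walk */
    static void check_object(mps_addr_t addr, mps_fmt_t fmt, mps_pool_t pool,
                             void *p, size_t s)
    {
      mps_arena_t arena = (mps_arena_t)p;   /* smuggled in via 'p' */
      mps_pool_t pool_found;
      mps_fmt_t fmt_found;
      (void)s;

      /* the lookups must agree with what the walk passed us */
      assert(mps_addr_pool(&pool_found, arena, addr) && pool_found == pool);
      assert(mps_addr_fmt(&fmt_found, arena, addr) && fmt_found == fmt);
    }

    /* usage: mps_arena_formatted_objects_walk(arena, check_object, arena, 0); */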
Copied from Perforce
Change: 169907
ServerID: perforce.ravenbrook.com
Changes:
- separate ArenaPoll and ArenaStep code paths;
- simplify ArenaPoll;
- loop calling TracePoll to catch-up;
- update pollThreshold correctly, depending on whether we have no work and are sleeping, or have work and are advancing the 'clock' by one unit. If there's no work, don't keep checking. Avoid multiple calls to ClockNow().
- ArenaStep should NOT change pollThreshold -- that's ArenaPoll's business. This means ArenaStep may advance, but not retard, trace work.
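A sketch of the reworked poll path under those rules; the declarations stand in for the real ones, and the threshold arithmetic is deliberately simplified:

    typedef struct GlobalsStruct {
      double pollThreshold;
      /* ... */
    } *Globals;

    double ClockNow(void);            /* the arena 'clock' */
    int TracePoll(Globals globals);   /* nonzero while work remains */

    void ArenaPoll(Globals globals)
    {
      double now = ClockNow();        /* read the clock once only */

      if (now < globals->pollThreshold)
        return;                       /* nothing due: don't keep checking */

      while (TracePoll(globals))      /* loop to catch up on trace work */
        continue;

      /* out of work and going back to sleep: advance by one unit */
      globals->pollThreshold = now + 1.0;
    }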
zcoll: 100MB is a more sensible arena size than 0.5 MB
test-runner.py: run zcoll; on w3i3, use m9 (=VC9.0) compiler
Copied from Perforce
Change: 169904
ServerID: perforce.ravenbrook.com
Drop mps_lib_callback_register from w3gen.def and put it in the new file mpslibcb.def, used only when building the MPS DLL (rule in commpost.nmk), since only the DLL contains the mpslibcb code. Correct expgen.sh accordingly (even though it's not working). So mps-fns.def (produced by w3build.bat by copying w3gen.def) is now correct for Configura to use to re-export MPS functions statically linked into a larger executable, and mpsdy.dll still correctly exports the mps_lib_callback_register function.
Copied from Perforce
Change: 169899
ServerID: perforce.ravenbrook.com
See design/poolamc for thorough documentation.
AMCBufferFill, for large requests (> 8 ArenaAligns) now gives precisely the requested size to the buffer, and immediately pads the rest [poolamc.c]. See #define AMCLargeSegPAGES 8 [config.h].
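A sketch of that policy in isolation; AMCLargeSegPAGES is the real config.h constant, but the helpers below and their arguments are made up for illustration:

    #include <stddef.h>

    #define AMCLargeSegPAGES ((size_t)8)   /* from config.h */

    /* a request for more than AMCLargeSegPAGES pages is "large" */
    int amcRequestIsLarge(size_t size, size_t arenaAlign)
    {
      return size > AMCLargeSegPAGES * arenaAlign;
    }

    /* a large request's buffer receives exactly 'size' bytes; the tail
       of the segment becomes a padding object immediately, rather than
       waiting for buffer-empty time */
    void amcFillLarge(char *segBase, size_t segSize, size_t size,
                      void (*makePad)(void *addr, size_t padSize))
    {
      if (size < segSize)
        (*makePad)(segBase + size, segSize - size);
    }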
New PoolTraceEndMethod -- do end-of-trace work:
Tracer calls PoolTraceEnd() after reclaim, when the trace is TraceFINISHED [trace.c]. AbstractPoolClass uses PoolTrivTraceEnd -- a NOOP [mpm.h, mpmst.h, mpmtypes.h, pool.c, poolabs.c, pooln.c]. AMC overrides with AMCTraceEnd, to emit diagnostic about how well the trace went [poolamc.c].
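A sketch of the same class-method pattern as ArenaCompact above, under assumed declarations (the real ones span mpm.h, mpmst.h, mpmtypes.h, pool.c and poolabs.c):

    typedef struct PoolStruct *Pool;
    typedef struct TraceStruct *Trace;
    typedef void (*PoolTraceEndMethod)(Pool pool, Trace trace);

    typedef struct PoolClassStruct {
      /* ... other class methods ... */
      PoolTraceEndMethod traceEnd;   /* PoolTrivTraceEnd or AMCTraceEnd */
    } *PoolClass;

    struct PoolStruct {
      PoolClass class;
      /* ... */
    };

    void PoolTrivTraceEnd(Pool pool, Trace trace)   /* the NOOP default */
    {
      (void)pool;
      (void)trace;
    }

    /* called by the tracer for each pool after reclaim, once the
       trace is TraceFINISHED */
    void PoolTraceEnd(Pool pool, Trace trace)
    {
      (*pool->class->traceEnd)(pool, trace);
    }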
DIAGNOSTICS:
AMCTraceEnd_pageret: reports page retention (currently gated to only emit if >= 100 pages are retained).
traceSetSignalEmergency: warn when a trace enters emergency mode.
DIAG buffer now 200K, and copes with overflow.
TESTS:
zcoll.c: Test allocation of mixed big and small objects. Test allocation of big object immediately followed by a retained small object, to test AMC LSP.
Copied from Perforce
Change: 169898
ServerID: perforce.ravenbrook.com
mps.h, mpsicv.c: implement mps_addr_pool and mps_addr_fmt
mpsicv.c: new addr_pool_test(), to test them
w3gen.def: export them
walkt0.c: test them within mps_arena_formatted_objects_walk(). (Also checks against values passed to stepper function, and against what we expect).
tool/test-runner.py: add walkt0
Copied from Perforce
Change: 169861
ServerID: perforce.ravenbrook.com
ArenaPoll will still call TracePoll if clamped...
but TracePoll won't start a new trace if clamped
ArenaStep won't start an opportunistic full collect if clamped
ArenaStep won't advance pollThreshold, ever
traceFlip asserts that clamped is FALSE.
(see http://info.ravenbrook.com/mail/2010/02/23/13-19-24/0.txt)
---
But no, clamped is more complex than that.
- Certain mps.h calls affect it.
- Certain MPS tests use it for more control and reproducibility.
- MPS itself uses it, as part of starting a full collect, for which it must first run any current trace to completion without starting any new ones.
In particular, this set of changes asserts:
MPS ASSERTION FAILURE: ArenaGlobals(arena)->clamped == FALSE (trace.c, line 534)
because mps_arena_collect() tries to start a full collect while clamped.
---
So this changelist is for historical interest only, and will be backed out.
Copied from Perforce
Change: 169853
ServerID: perforce.ravenbrook.com
Changes:
- separate ArenaPoll and ArenaStep code paths;
- simplify ArenaPoll;
- loop calling TracePoll to catch-up;
zcoll: 100MB is a more sensible arena size than 0.5 MB
Warning: barely "experimental" code quality, omitting the following necessities:
- consideration of interactions with ArenaStep,
- re-engineering of ArenaPoll and friends.
Copied from Perforce
Change: 169814
ServerID: perforce.ravenbrook.com
poolamc.c tidy up:
neater implementation of obj1pip (amcReclaimNailed)
neater implementation of amcResetTraceIdStats -- no need for a function any more
delete lots of obsolete temporary diagnostics (superseded by AMCTraceEnd_pageret)
a few more avers (especially on buffer empty)
also revert temporary diagnostic changes in arena.c, config.h, diag.c, global.c
diag.c: fix the diag-buffer size at 100 screenfuls (200000 chars).
zcoll.c: reinstate Make 50000 with occasional big objs.
Copied from Perforce
Change: 168688
ServerID: perforce.ravenbrook.com
mpmst.h: (comments only) clarify: after all reclaims
pool.c: PoolClassCheck: FUNCHECK(class->traceEnd)
pooln.c: traceEnd method in null pool class
Copied from Perforce
Change: 168687
ServerID: perforce.ravenbrook.com
(introduced in //info.ravenbrook.com/project/mps/branch/2009-03-31/padding/code/zcoll.c (https://github.com/Ravenbrook/mps/issues/4), changelist 168451)
Now that we have two root tables -- Ambig & Exact -- we had better store them in two separate variables! D'oh.
Bug: the single variable "root_table" was used for Ambig, and then re-used for Exact. On clean-up, only Exact was mps_root-destroyed, leaving Ambig still around at mps_arena_destroy. In .variety.ci, after ControlFinish, a CHECKL(RingCheck(&arenaGlobals->rootRing)) tries to access the undestroyed root, even though the ControlPool has been unmapped. This causes a seg fault. GDB works fine to show this, with bt.
Fix: two vars; call mps_root_destroy on each.
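A sketch of the fix; creation arguments are elided and the variable names are illustrative:

    #include "mps.h"

    static mps_root_t root_ambig;   /* was: one "root_table" for both */
    static mps_root_t root_exact;

    static void clear_up(void)
    {
      mps_root_destroy(root_ambig);   /* previously leaked: only ... */
      mps_root_destroy(root_exact);   /* ... the Exact root was destroyed */
    }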
Copied from Perforce
Change: 168533
ServerID: perforce.ravenbrook.com