vertical scrolling now works in a line-wise manner,
just like in rio(1), sam(1) and friends. horizontal
scrolling had problems with some line widths,
where lines got cut off before the last
characters were shown.
finally, pressing LMB or RMB and sweeping across
any of the blocks caused a storm of plumbs
and visibility toggling (when over the expander line).
this doesn't happen anymore.
Currently we use millisecond ticks for time spent in each function.
This is not good enough for modern machines where fast functions could
be completed in a handful of nanoseconds. Instead let us just use the
raw ticks and store the cycle frequency in the output prof data. This
requires enlarging the time field in the data to 8 bytes,
which broke the assumptions about struct alignment and required a
slight refactor of the code used to read it in prof(1). Since this is
a breaking change, we've devised a small header to communicate the
version for future revisions of this format, and this patch includes a
modification to file(1) for recognizing this format. Additionally,
some minor improvements were made across the board.
When a function calls itself, the execution slot of its child is now
just added to its own time. This makes conceptual sense and also
reduces a big cause of depth inflation.
The current profiling buffer size was 128k, which caused more frustration
than it was worth, as demand paging makes a bigger buffer cheap. Assuming
at worst 64 bytes per Plink, this will use ~16M of virtual address space
on 64-bit systems.
POWER does not provide subtract-immediate instructions and
instead relies on adding a negative immediate. It used to be
that the linker was the one who would go through and rewrite
these to be negative, but it really should be done in the
compiler while we still have the width information.
* Add a handful of 64-bit classifications to 9l, along with instruction generation for each.
* 9c should avoid generating immediate instructions for 64-bit constants.
* libmach should know about 9l's generation to present a better disassembly.
* libmach now properly displays MOVD for moves between registers on 64-bit.
This was leftover from before 6c was
in /sys/src/cmd, as the mkfile adds this
to the include path. Now that we have 6c,
this subdirectory is never used.
Commit 9f755671fb broke
webseeding with the last block.
The havepiece() call at the end was there because the inner
loop was not calling havepiece() on the last block, as it
did not take the piece length into account.
Now, instead, fix the inner loop, making the code
more straightforward, so we call havepiece() on the
last block.
The transition time in the timezone info file is,
confusingly, in local time and not UTC, so we need
to translate it before we do the comparison.
While we're here, revert the Australian timezone
change that made the offsets UTC, and add some tests
to make sure we get this right.
This global "Mss" MIB element does not really exist,
and it makes no sense as the MSS is negotiated
per connection.
Put the InLimbo in the statistics table.
In the limbo() function, once tpriv->nlimbo
reaches Maxlimbo, we'd try to re-use
Limbo entries from the head of the hash
chain. However, there's a special case
where our current chain contains only
a single entry. Then Limbo **l points
to its next pointer, and writing
*l = lp; would result in the entry
being linked to itself, leaking it.
The for(;;) loop in limborexmit() was wrong:
the "continue" case would not advance
the lp pointer at all (such as when
tpriv->nlimbo reaches > 100), so we'd stop
cleaning out entries.
Handle the case of Fsnewcall() returning nil:
we have to free Limbo *lp, as we just removed
it from the hash table.
Add tpriv->nlimbo as "InLimbo" at the
end of /net/tcp/stats.
We were allocating the dialid and acceptid using:
	rand()<<16 + rand()
this gives biased values, as rand() returns a 15-bit
number. Instead, use two calls to nrand() to get
the full 32-bit unsigned range.
the start generation was allocated by calling rand(),
which only gives a value between 0 and 2^15-1.
Instead, make a newgen() function that returns a new
generation id in the full 32-bit range, but also
avoids 0 and Hangupgen special values.
Cleanup and make all helper functions static.
Using > causes the kbmap file to get truncated,
which resets to the default keymap and *THEN*
applies the new change, which is probably not
what was intended.