We have to ensure that we do the putep() loop
only once for detach, so serialize the state
transition using ep0 qlock().
Furthermore, once the state is Ddetach, we
must ensure never to set it to something else
(such as Dreset or Denabled).
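A rough sketch of the intended serialization (Ep, Ddetach and
qlock are devusb names; the helper itself is illustrative, not
the actual driver code):

void
detachdev(Ep *ep0)
{
	qlock(ep0);	/* the ep0 qlock serializes the state transition */
	if(ep0->dev->state == Ddetach){
		qunlock(ep0);	/* already detached: never repeat the putep() loop */
		return;
	}
	ep0->dev->state = Ddetach;	/* final state: never set back to Dreset or Denabled */
	qunlock(ep0);
	/* ... run the putep() loop exactly once ... */
}
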
usbids were allocated globally with a generation counter,
but ids freed out of order were never reused,
eventually resulting in overflow.
instead, we use a different scheme: allocate the next
higher id until we run out, then start over from the
lowest free id.
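a minimal sketch of the allocation scheme, assuming a hypothetical
fixed-size table of ids (the real data structure may differ):

enum { Nid = 64 };

static char	inuse[Nid];	/* hypothetical usage map, ids start at 1 */
static int	lastid;

int
allocid(void)
{
	int i;

	/* prefer ids above the last one handed out ... */
	for(i = lastid+1; i < Nid; i++)
		if(!inuse[i])
			goto Found;
	/* ... and only then start over from the lowest free id */
	for(i = 1; i <= lastid; i++)
		if(!inuse[i])
			goto Found;
	return -1;	/* out of ids */
Found:
	inuse[i] = 1;
	lastid = i;
	return i;
}

void
freeid(int i)
{
	inuse[i] = 0;	/* freeing out of order is now harmless */
}
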
properly maintain epmax as well when putep() happens
out of order.
make newdev() and newdevep() return the new Ep* with a
reference taken, preventing someone from freeing the ep
under us.
fix the locking so that once we release the epslock, all endpoints
have ep->dev set properly, and remove impossible checks.
remove the annoying "dump" ctl that spams the console.
The test just called date twice, assuming both calls
execute in the same second. This causes false positives
with errors like the following (usually just 1 second
difference):
term% while(){./zones.rc}
/adm/timezone/US_Arizona Sun, 06 Oct 2024 09:09:12 -0700 1728230953 1728230952 are not equal
/adm/timezone/Uruguay Sun, 06 Oct 2024 14:09:17 -0200 1728230958 1728230957 are not equal
/adm/timezone/Japan Mon, 07 Oct 2024 01:09:19 +0900 1728230960 1728230959 are not equal
/adm/timezone/Iran Sun, 06 Oct 2024 19:39:25 +0330 1728230966 1728230965 are not equal
/adm/timezone/Australia_West Mon, 07 Oct 2024 00:09:27 +0800 1728230968 1728230967 are not equal
/adm/timezone/US_Eastern Sun, 06 Oct 2024 12:09:29 -0400 1728230970 1728230969 are not equal
/adm/timezone/GMT Sun, 06 Oct 2024 16:09:31 +0000 1728230972 1728230971 are not equal
/adm/timezone/local Sun, 06 Oct 2024 18:09:34 +0200 1728230975 1728230974 are not equal
/adm/timezone/Mexico_BajaSur Sun, 06 Oct 2024 09:09:36 -0700 1728230977 1728230976 are not equal
The fix is to get the current time once with date -n,
pass that to date to format it, and then convert back
and compare.
remove the global statistics counters from taslock.c
as they're not particularly useful nor precise
and just cause unnecessary cache traffic.
if we want them back, we should place them into
the Mach structure.
also change the lock() function prototype to return void.
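if the counters ever come back, per-cpu fields in the Mach
structure avoid the shared cache lines; a sketch (the fields and
the lock() body are illustrative, not current kernel code):

struct Mach {
	/* ... existing per-cpu state ... */
	uvlong	nlock;		/* hypothetical: lock() calls on this cpu */
	uvlong	nlockspin;	/* hypothetical: times the tas loop had to spin */
};

void
lock(Lock *l)		/* prototype now returns void */
{
	m->nlock++;	/* m is the per-processor Mach pointer: no cache-line bouncing */
	USED(l);
	/* ... test-and-set loop, bumping m->nlockspin when it spins ... */
}
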
We cannot use lock() from screenputs() because lock() calls
lockloop(), which tries to print(); on very slow output
(such as qemu) this can cause a kernel stack overflow.
It got triggered by noam with his rube-goldberg qemu setup:
lock 0xffffffff8058bbe0 loop key 0xdeaddead pc 0xffffffff80111114 held by pc 0xffffffff80111114 proc 339
panic: kenter: -40 stack bytes left, up 0xffffffff80bdfd00 ureg 0xffffffff80bddcd8 at pc 0xffffffff80231597
dumpstack
ktrace /kernel/path 0xffffffff80117679 0xffffffff80bddae0 <<EOF
We might want to move this locking logic outside of screenputs()
in the future. It is very similar to what iprint() does.
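One way to avoid the recursion, sketched with a hypothetical lock
name and a simple drop-if-busy policy (not necessarily what the
commit does; iprint() solves the same problem):

static Lock screenlock;		/* hypothetical */

void
screenputs(char *s, int n)
{
	USED(s, n);	/* drawing elided in this sketch */

	/* lock() is off limits: lockloop() would print(), re-entering
	 * screenputs() and, on slow output, overflowing the kernel stack */
	if(!canlock(&screenlock))
		return;		/* drop the screen output rather than risk that */
	/* ... draw s[0..n-1] ... */
	unlock(&screenlock);
}
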
git/save gets a list of paths (added or removed)
passed to it, and we have to ALWAYS stat the
file in the working directory to determine the
effective file-type.
There was a bug in the "skip children paths"
loop that would compare the next path element
instead of the full path prefix including
the next element.
reproducer:
git/init
touch a
git/add a
git/commit -m 'add a' a
rm a
mkdir a
touch a/b
git/add a/b
git/commit -m 'switch to folder' a a/b
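the comparison now has to cover the whole prefix and respect the
path boundary; a self-contained illustration of the idea (ischild()
is a hypothetical helper, not the code in git/save):

#include <u.h>
#include <libc.h>

/* true if path lies underneath dir: "a" covers "a/b" but not "ab" */
int
ischild(char *path, char *dir)
{
	int n;

	n = strlen(dir);
	return strncmp(path, dir, n) == 0 && path[n] == '/';
}
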
For handling route invalidations, we have to allow
short bursts of traffic. Therefore we keep track
of the number of ra's received in the ra interval
and only start dropping packets when reaching 100
packets.
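in essence this is a per-interval burst allowance; a generic,
self-contained illustration (made-up names, not the actual
routing code):

enum { Burst = 100 };	/* packets allowed before we start dropping */

typedef struct Rlim Rlim;
struct Rlim {
	int npkt;	/* packets counted in the current ra interval */
};

/* a new ra starts a new interval */
void
newinterval(Rlim *r)
{
	r->npkt = 0;
}

/* non-zero once the burst allowance for this interval is used up */
int
shoulddrop(Rlim *r)
{
	return ++r->npkt > Burst;
}
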
No idea who committed this in 2022, as it's "glenda@9front.local",
but qid.vers is incremented on each write and we definitely
should not use it as the cache tag.
Also, the initial code was stolen from du.c as the comment says,
and that one does the right thing.
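The tag has to be stable across writes; qid.path and qid.type are,
qid.vers is not. A du(1)-style check, sketched with a hypothetical
cache entry:

#include <u.h>
#include <libc.h>

typedef struct Centry Centry;
struct Centry {
	uvlong	path;	/* from Qid.path: stable file identity */
	uchar	type;	/* from Qid.type */
};

/* do NOT compare qid.vers: it changes on every write */
int
sametag(Centry *c, Dir *d)
{
	return c->path == d->qid.path && c->type == d->qid.type;
}
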
We want to run the tests before we do the installation
into the system.
So do a temporary install into the test/$cputype.git/
directory and bind it on /bin/git; that way,
all the scripts run the local source version.
When skipping objects, we need to process the full queue,
because some of the objects in the queue may have already
been painted with keep. This can cost a small amount of time,
but should not need to advance the frontier by more than
one object, so the additional time should be proportional
to the spread of the graph.
the previous bug wasn't a missing clamp, but a
mishandling of the 1-based closed intervals that
we were generating internally, and some asserts
that assumed open intervals.
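for reference, the two conventions side by side (reading "open" as
the usual C half-open convention; a self-contained illustration,
not the original code):

/* 1-based closed interval [lo, hi]: both endpoints are valid values */
int
inclosed(int x, int lo, int hi)
{
	return x >= lo && x <= hi;
}

/* half-open interval [lo, hi): hi itself is excluded, so an
 * assert(x < hi) written for this form rejects a valid
 * closed-interval endpoint */
int
inhalfopen(int x, int lo, int hi)
{
	return x >= lo && x < hi;
}
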
Before, we would refuse to recurse, but would still give
a response with hints back. Some nefarious clients will interpret the
lack of a Refused response code as us being an open resolver.
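On the reply side the change amounts to something like this
(hypothetical structure and helper; ndb/dns has its own message
types):

enum { Refused = 5 };	/* standard DNS rcode for a refused query */

typedef struct Reply Reply;
struct Reply {
	int	rcode;
	int	nhints;	/* hint/referral records attached to the reply */
};

/* recursion desired, but this client may not recurse:
 * say Refused and attach nothing, instead of a referral with hints */
void
refuse(Reply *r)
{
	r->rcode = Refused;
	r->nhints = 0;
}
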
When clunking a Fid while the file-system is read-only,
don't just free the Amsg, but also drop the
references to dent and mnt.
Make clunkfid() nil out fid->rclose, so no use
after free is possible.
Make clunkfid() always set the return pointer,
so missing prior initialization by the caller
cannot cause harm.
Do not abuse fidtab lock for serializing
clunking.
The clunk should serialize on Fid.Lock
instead, so add a canlock check here.
The lock order is strictly:
Fid.Lock > Conn.fidtab[x].Lock
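A sketch of the resulting shape (simplified from the description
above; Conn, Fid, Amsg and Nfidtab stand in for the file server's
own definitions and the details may differ):

void
clunkfid(Conn *c, Fid *f, Amsg **ao)
{
	*ao = nil;			/* always set the return pointer */
	assert(!canlock(f));		/* caller must already hold Fid.Lock */
	if(f->rclose != nil){
		*ao = f->rclose;	/* hand the deferred remove to the caller */
		f->rclose = nil;	/* not reachable again: no use after free */
	}
	lock(&c->fidtab[f->fid % Nfidtab]);	/* taken after Fid.Lock, per the order above */
	/* ... unhash f from the fid table ... */
	unlock(&c->fidtab[f->fid % Nfidtab]);
}
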