Also, change the load/store modifiers ('l', 's') to 'r', 'w', as on other architectures.
>db{0}> machine watch/1 hostname
>Bad modifier
>db{0}> machine watch/s1 hostname
>add watchpoint 0 as ffffc00001087848
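After this change, the same watchpoint would presumably be set with the
'w' (write) modifier instead, e.g. (hypothetical transcript):
>db{0}> machine watch/w1 hostname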
with the kernel's base VSID.
- In va_to_vsid(), always compute the VSID from the base VSID in the
pmap and the effective segment ID (ESID), rather than extracting it
from the pmap's segment register value for that ESID. Not only does
this make the code the same between OEA and OEA64, but it also lets
us compute the correct VSID for that pmap/ESID even if the cached SR
for that ESID currently contains something else, such as an I/O segment
mapping (as might be the case on a 601).
With this change, we can temporarily toggle between an I/O segment and
an HTAB-mapped segment if needed (e.g. when calling OpenFirmware on
a 601-based system).
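A minimal sketch of the idea (the field names, shift, and VSID_MAKE()
encoding below are hypothetical stand-ins, not the actual pmap code):

#include <stdint.h>

#define ADDR_SR_SHIFT	28		/* top 4 bits of a 32-bit EA select the segment */
#define VSID_MAKE(base, esid)	(((base) << 4) | (esid))

struct pmap {
	uint32_t pm_vsid;		/* per-pmap base VSID */
	uint32_t pm_sr[16];		/* cached segment register values */
};

static inline uint32_t
va_to_vsid(const struct pmap *pm, uint32_t va)
{
	uint32_t esid = va >> ADDR_SR_SHIFT;

	/*
	 * Derive the VSID from the base VSID and the ESID instead of
	 * extracting it from pm->pm_sr[esid]; that cached SR may
	 * currently hold something else, e.g. an I/O segment mapping
	 * on a 601.
	 */
	return VSID_MAKE(pm->pm_vsid, esid);
}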
With some drivers (at least radeon(4)), the driver sometimes does not
choose the resolution correctly. The options
DRM_MAX_RESOLUTION_HORIZONTAL and DRM_MAX_RESOLUTION_VERTICAL allow
limiting the maximum resolution in the X and Y directions.
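For example, a kernel configuration could cap the chosen mode at
1920x1080 (the values here are only illustrative):

options 	DRM_MAX_RESOLUTION_HORIZONTAL=1920
options 	DRM_MAX_RESOLUTION_VERTICAL=1080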
This check was added too hastily and broke the lint build. Among others,
lib/libpuffs has -w included in LINTFLAGS, which means that the build
can fail even on new warnings, not only on errors.
libpuffs compares a uint16_t with constants from an unnamed enum type.
Since the enum type is completely unnamed (neither a tag nor a typedef),
there is no way to define a struct member having this type. This was a
scenario that I just didn't consider when I added the check to lint.
For now, disable the new check completely. The previously existing lint
checks stay enabled, including the one that warns about mismatched
anonymous enum types in the '==' operator, which is very similar to the
now disabled check.
Since unnamed enum types cannot be named in a type cast, there is no
sensible way to resolve this type mismatch without changing the
definition of the enum type itself, and that definition may live in a
header that cannot be modified.
Therefore, comparisons with enum constants of unnamed types cannot be
sensibly warned about.
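The problematic pattern looks roughly like this (a hypothetical sketch,
not the actual libpuffs code):

#include <stdint.h>

/* completely unnamed enum: no tag, no typedef */
enum { REQ_READ = 1, REQ_WRITE = 2 };

struct header {
	uint16_t type;		/* cannot be declared with the enum's type */
};

int
is_write(const struct header *h)
{
	/*
	 * The new check warned about this comparison, even though there
	 * is no way to give 'type' the unnamed enum type or to cast to it.
	 */
	return h->type == REQ_WRITE;
}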
The previous 'casestmt' was wrong since a case label is not a statement
at all.
The previous 'swstmt' was overly short, and wrong as well, since it
represents only the 'switch (expr)' part, which is not a complete switch
statement. Same for 'ifstmt', 'whilestmt', 'forstmt'.
The previous word 'head' was not precise enough since it didn't specify
exactly where the head ends and the body starts. Especially for
handling the dangling else, this distinction is important.
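For illustration (a generic example, not taken from indent's code): the
head 'if (b)' ends right after the closing parenthesis, and the dangling
'else' binds to that innermost head, not to the outer 'if (a)':

void f(void);
void g(void);

void
example(int a, int b)
{
	if (a)
		if (b)
			f();
		else		/* belongs to 'if (b)' */
			g();
}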
No functional change.
This refactoring reduces the indentation of the code and removes any
ambiguity as to which 'switch' statement a 'break' belongs to, as there
are no more nested 'switch' statements.
No functional change.
It's strange that indent's own code is not formatted by indent itself,
which would be a good demonstration of its capabilities.
In its current state, I don't trust indent to get even the tokenization
right, so the only safe way is to format the code manually.
Limited support for hardware watchpoints has been available for some time, but it
has not been working properly. In addition, it stopped working entirely at the
time of the PTRACE support commit on 2018-12-13. This has been fixed to work
correctly, and also made practical by sharing hardware watchpoints and breakpoints
between CPUs on MULTIPROCESSOR kernels.
Also fixed a bug that caused a malfunction when switching CPUs with
"machine cpu N" after entering ddb from somewhere other than cpu_Debugger().
I have confirmed that the CPU can be switched with "machine cpu N" and that ddb
returns properly in each case where it is entered via a ddb break/watchpoint,
a hardware break/watchpoint, or cpu_Debugger().
- Background: ixgbe doesn't use the common MCLGET() interface but instead uses
a driver-specific cluster allocation mechanism (jcl). The clusters are
pre-allocated in a fixed number, and the current number per queue
is num_rx_desc * 2 (2048 * 2 = 4096), which is too small. Another problem
is that the max length of the pcq used in the TX path is large
(4096). Example:
100M <----- [ixg0 ixg1] <----- 1G
2048 TX descs <--- 4096 pcqs <---- 2048 RX descs
If a machine forwards traffic from the 1G interface to the 100M interface,
it would require 2048 + 4096 + 2048 = 8192 clusters, but the current number
is 2048 * 2 = 4096, which is too small. Even if both interfaces' link speeds
are the same and only a small number of packets are queued in the pcq, 4096
jcl are not enough because 2048 (RX) + 2048 (TX) = 4096. If the jcl pool is
exhausted, not only is forwarding from ixg1 to ixg0 dropped, but forwarding
from ixg1 to another interface (e.g. wm0) is dropped as well. Sockets also
queue packets, so if a lot of sockets are used and/or a socket buffer
size is increased, this also becomes a problem. When the jcl pool is
exhausted, the evcnt(9) counter "ixgX qY Rx no jumbo mbuf" is incremented.
Example:
vmstat -ev | grep ixg1 | grep "no jumbo"
ixg1 q0 Rx no jumbo mbuf 0 0 misc
ixg1 q1 Rx no jumbo mbuf 0 0 misc
ixg1 q2 Rx no jumbo mbuf 141326 0 misc
ixg1 q3 Rx no jumbo mbuf 0 0 misc
- To solve this problem:
- Add a new config parameter, IXGBE_JCLNUM_MULTI, and set the default to 3
(2048 * 3). The minimum value is 2. The total number of jcl per queue
is available via the hw.ixgN.num_jcl_per_queue sysctl (see the example
after this list).
- Reduce the max length of the pcq used in the TX path from
4096 to 2048.
- Reviewed by knakahara@ and ozaki-r@.
- TODO: Use MCLGET().
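For example, to raise the multiplier above the default and then check
the resulting per-queue jcl count (assuming IXGBE_JCLNUM_MULTI is set as
a kernel option; the numbers shown are only illustrative):

options 	IXGBE_JCLNUM_MULTI=4
# sysctl hw.ixg0.num_jcl_per_queue
hw.ixg0.num_jcl_per_queue = 8192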