If a fetcher returns with no data (no content, or an HTTP 204 No
Content response) the hlcache state machine was trying to mime sniff
using non-existent header data and reporting the resulting
NSERROR_NOT_FOUND as a "BadType" message.
This changes the behaviour to match the headers-received case, where
NSERROR_NOT_FOUND from the mime sniffing is not treated as an error.
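A sketch of the adjusted handling; the sniffer call is simplified and
its exact signature is assumed here:

    /* Sniffing an empty response yields NSERROR_NOT_FOUND; treat
     * that as "no type available" rather than a hard error. */
    error = mimesniff_compute_effective_type(handle, NULL, 0,
            false, false, &effective_type);
    if (error == NSERROR_NOT_FOUND) {
            /* nothing to sniff from is not a failure here */
            error = NSERROR_OK;
    }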
This is an attempt to ameliorate the situation found in #2384, where
we see the cURL connect() failing to complete. Based on the pcap from
the bug log, we believe that RISC OS is likely failing to signal the
completion of the connection to cURL, so cURL times out.
This change permits retries of timed-out connections in the hope that
a fresh socket FD might subsequently function correctly. The defaults
chosen preserve the previous behaviour of 30 seconds before a timeout
is reported, but within that time we now make 3 separate attempts to
connect the socket.
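A minimal sketch of the retry shape, assuming a 10 second connect
timeout and using the easy interface for brevity (the real fetcher is
asynchronous and its option names differ):

    #include <curl/curl.h>

    #define CONNECT_TIMEOUT_S 10L /* 3 tries * 10s = same 30s window */
    #define CONNECT_RETRIES   3

    static CURLcode fetch_with_retry(CURL *curl)
    {
            CURLcode res = CURLE_OK;

            curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT,
                            CONNECT_TIMEOUT_S);
            for (int tries = 0; tries < CONNECT_RETRIES; tries++) {
                    res = curl_easy_perform(curl);
                    if (res != CURLE_OPERATION_TIMEDOUT)
                            break; /* success or non-timeout error */
                    /* retrying opens a fresh socket FD which may
                     * connect where the previous one stalled */
            }
            return res;
    }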
Any fetch start error was being reported as "out of memory", which
was clearly insufficient. For example, bad URLs (the reported case was
file:// with a missing /) were causing a warn_user report of out of
memory. This change now at least causes a "bad url" message.
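A sketch of the improved reporting; the message keys here are
illustrative:

    /* Map the fetch start error to a matching user message
     * instead of always claiming out of memory. */
    switch (error) {
    case NSERROR_BAD_URL:
            warn_user("BadURL", NULL);
            break;
    case NSERROR_NOMEM:
            warn_user("NoMemory", NULL);
            break;
    default:
            warn_user("FetchError", NULL);
            break;
    }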
This changes the LOG macro to be variadic, removing the need for all
callsites to have double bracketing, and allows for future improvement
in how we use the logging macros.
The callsites were changed with coccinelle and the changes checked by
hand. Compile tested for several frontends but not all.
A formatting annotation has also been added, which allows the compiler
to check the parameters and types passed to the logging calls.
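The shape of the new macro, with assumed names (the logging function
here is illustrative):

    /* The format attribute lets the compiler check the argument
     * types against the format string at every callsite. */
    void nslog_log(const char *format, ...)
                    __attribute__((format(printf, 1, 2)));

    /* Callers now write LOG("%s", str) rather than LOG(("%s", str)) */
    #define LOG(format, ...) nslog_log(format, ##__VA_ARGS__)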
On some OSes the ftruncate operation can take some time, so move it to
occur in the background maintenance operations instead of when data
blocks are initially opened. This should improve browsing
responsiveness.
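A sketch of the deferred approach, with illustrative names:

    #include <stdbool.h>
    #include <unistd.h>

    #define BLOCK_FILE_COUNT 4 /* illustrative */

    struct block_file {
            int fd;
            off_t size;
            bool needs_trim; /* set at open, handled later */
    };

    /* Run from the periodic maintenance pass so that a slow
     * ftruncate() no longer stalls the open path. */
    static void block_maintenance(struct block_file *blocks)
    {
            for (int i = 0; i < BLOCK_FILE_COUNT; i++) {
                    if (blocks[i].needs_trim) {
                            ftruncate(blocks[i].fd, blocks[i].size);
                            blocks[i].needs_trim = false;
                    }
            }
    }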
It seems many filesystems are considerably more efficient if the block
file is allocated its entire extent at once rather than having the
file grown continually later.
The size of the block files is known at their creation time, so this
change ensures they are grown to the full possible extent immediately,
removing inefficient future writes.
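A minimal sketch of such preallocation at creation time, assuming
illustrative size constants:

    #include <fcntl.h>
    #include <unistd.h>

    #define BLOCK_SIZE  (8 * 1024) /* illustrative */
    #define BLOCK_COUNT 1024       /* illustrative */

    /* Create a block file grown to its full extent up front. */
    static int block_file_create(const char *path)
    {
            int fd = open(path, O_CREAT | O_RDWR, 0600);

            if (fd >= 0 &&
                ftruncate(fd, (off_t)BLOCK_SIZE * BLOCK_COUNT) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }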
Add a new interface to the content to allow automatically scaled
content redraws. This is intended to replace the thumbnail_redraw
interface with something more generic.
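A plausible shape for the new interface (the exact signature is
assumed):

    /* Redraw the content scaled to the given area, as the
     * thumbnailers require, without a thumbnail-specific API. */
    bool content_scaled_redraw(struct hlcache_handle *h,
                    int width, int height,
                    const struct redraw_context *ctx);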
The generic bitmap handlers provided by each frontend are called back
from the core and therefore should be in an operation table. This was
one of the very few remaining interfaces stopping the core code from
being split into a library.
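A sketch of the operation-table shape; the member set shown here is
illustrative:

    /* Each frontend fills in one of these and the core calls
     * through it, removing the direct link-time dependency. */
    struct gui_bitmap_table {
            void *(*create)(int width, int height, unsigned int flags);
            void (*destroy)(void *bitmap);
            unsigned char *(*get_buffer)(void *bitmap);
            size_t (*get_rowstride)(void *bitmap);
    };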
The fetch API previously allowed the caller to supply the storage;
this was never used and was preventing the refactoring necessary for
small block storage to be available.
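An illustrative simplified shape of the fetch operation after the
change (names assumed); the store now always allocates the buffer
itself:

    nserror store_fetch(nsurl *url, enum backing_store_flags flags,
                    uint8_t **data_out, size_t *datalen_out);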
Change to computing the element index from the flags passed to the
store and fetch methods instead of passing the flags around and
calculating it everywhere.
Additionally, split out writing an element of an entry to file into a
distinct function to make the code clearer.
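A sketch of the index computation, with assumed names and values:

    /* Derive the element index once from the flags rather than
     * recomputing it at each use site. */
    enum store_elem_idx {
            ENTRY_ELEM_DATA = 0, /* data block element */
            ENTRY_ELEM_META = 1, /* metadata element */
    };

    static int store_entry_elem_idx(enum backing_store_flags flags)
    {
            return (flags & BACKING_STORE_META) ?
                            ENTRY_ELEM_META : ENTRY_ELEM_DATA;
    }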
The content thumbnailers for each frontend were being provided with
the content's URL. This was only ever used to call the urldb thumbnail
setting API.
This changes it so the single callsite that passed a valid URL adds
the bitmap to that URL itself in desktop_history.c, instead of forcing
every frontend to require the urldb API.
Additionally, the old API could pass the URL as NULL, which was
causing asserts where this was not an expected parameter value.
Because of this, this change also fixes bug #2286, which was likewise
present in the monkey frontend, as both called nsurl_access() on the
URL without the NULL check and caused an assertion.
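A sketch of the single remaining callsite in desktop_history.c; the
exact function names are approximate:

    /* The bitmap is associated with the URL here, so the
     * frontend thumbnailers never see the URL at all. */
    if (thumbnail_create(content, bitmap)) {
            urldb_set_thumbnail(url, bitmap);
    }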
This splits a great deal of the win32 window code out from the other
gui code. It also removes large quantities of unused and junk
variables and functions.
The low level cache deserialisation was leaving bad data in a low
level cache object in the error case. This fixes it so the object
state only gets modified on successful deserialisation of all the
metadata.
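A sketch of the pattern, where read_metadata stands in for the actual
deserialisation steps:

    /* Deserialise into a scratch copy and commit it only once
     * every field has been read successfully. */
    static nserror entry_deserialise(struct store_entry *ent,
                    const uint8_t *data, size_t datalen)
    {
            struct store_entry temp;
            nserror res = read_metadata(&temp, data, datalen);

            if (res != NSERROR_OK)
                    return res; /* *ent is untouched on failure */

            *ent = temp; /* commit after complete success */
            return NSERROR_OK;
    }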