Version 1.5.1 includes some of our own fixes, reducing our delta
to upstream.
These should not be needed now:
- 2cbb337756
Squash harmless Clang warning introduced in Duktape 1.5.0.
- 8f8cda2b48
Fix Duktape on AmigaOS3 (thanks to Tygre and Sami)
Libjpeg, used in NetSurf for decoding JPEG images, handles exceptions using a
pair of non-local jump functions: setjmp() and longjmp(). When a decompression
context is created via a call to jpeg_create_decompress(), the caller passes a
jpeg_decompress_struct structure as a parameter. This structure should have a
validly initialized jump buffer, so that this function and others called later
can jump to the exception handling context.
The jpeg backend of NetSurf currently initializes libjpeg incorrectly: the
jump buffer is filled after the call to jpeg_create_decompress(). This results
in a jump to a random address if an exception is caught during the operation
of jpeg_create_decompress().
The patch moves the initialization of the jump buffer to before the call to
jpeg_create_decompress().
Signed-off-by: Sergei Rogachev <rogachevsergei@gmail.com>
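A minimal sketch of the corrected ordering, using the standard libjpeg error
manager hooks; the wrapper struct and function names are illustrative, not
NetSurf's exact ones:

    #include <stdbool.h>
    #include <setjmp.h>
    #include <stdio.h>
    #include <jpeglib.h>

    struct nsjpeg_err {
            struct jpeg_error_mgr pub; /* libjpeg error manager fields */
            jmp_buf env;               /* jump target for exceptions */
    };

    static void nsjpeg_error_exit(j_common_ptr cinfo)
    {
            struct nsjpeg_err *err = (struct nsjpeg_err *)cinfo->err;
            longjmp(err->env, 1); /* unwind to the handler below */
    }

    static bool nsjpeg_begin_decode(struct jpeg_decompress_struct *cinfo,
                                    struct nsjpeg_err *err)
    {
            cinfo->err = jpeg_std_error(&err->pub);
            err->pub.error_exit = nsjpeg_error_exit;

            /* Arm the jump buffer BEFORE creating the decompress
             * context, so an error raised inside
             * jpeg_create_decompress() lands here rather than
             * jumping through an uninitialized buffer. */
            if (setjmp(err->env) != 0) {
                    jpeg_destroy_decompress(cinfo);
                    return false;
            }

            jpeg_create_decompress(cinfo);
            return true;
    }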
Bitmap file decoding is done at the first call to redraw, but the modified
callback was not being invoked at the correct time (immediately after
decode), so frontend image changes were not being applied. This caused nsgtk
to fail to apply its colour space fixups, so red was swapped with blue.
This avoids situations where we threw away the length, only for the caller
to have to strlen() the returned string.
Note: there seems to be a case of the amiga front end writing beyond the end
of an allocation. Added a TODO for now.
When processing an X509 certificate chain from OpenSSL it is necessary to
allow the entire chain to be processed rather than halting processing at the
first certificate with an error. This allows errors with a certificate
authority to be examined.
The wallclock() API uses gettimeofday(), which can be affected by the system
clock being changed etc. The curl fetcher uses this API to generate a timing
delta and does not cope with these gettimeofday() issues.
This changes the fetcher to use the nsutils library monotonic time function,
which does not suffer from the problems of gettimeofday().
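A minimal sketch of the idea using POSIX clock_gettime(CLOCK_MONOTONIC);
NetSurf itself goes through the libnsutils wrapper, but the property relied
on is the same: the delta is immune to wall-clock adjustments:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Monotonic milliseconds: unaffected by the system clock
     * being stepped or slewed. */
    static uint64_t monotonic_ms(void)
    {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    int main(void)
    {
            uint64_t start = monotonic_ms();
            /* ... perform the fetch work being timed ... */
            printf("elapsed: %llu ms\n",
                   (unsigned long long)(monotonic_ms() - start));
            return 0;
    }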
The config header was causing many source files to unnecessarily include the
dirent headers, causing extra dependencies. This has been fixed by providing
a utility dirent header that provides a common API across all platforms
while removing the unnecessary dirent header usage.
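A rough sketch of the shape of such a utility header; the guard name and the
set of fallbacks are assumptions for illustration, not the exact contents of
NetSurf's header:

    #ifndef NETSURF_UTILS_DIRENT_H_
    #define NETSURF_UTILS_DIRENT_H_

    #include <sys/types.h>
    #include <dirent.h> /* the common POSIX case */

    #ifndef HAVE_SCANDIR
    /* Hypothetical fallback prototypes for platforms whose dirent
     * lacks these POSIX functions; implemented in a utility C file. */
    int alphasort(const struct dirent **d1, const struct dirent **d2);
    int scandir(const char *path, struct dirent ***namelist,
                int (*filter)(const struct dirent *),
                int (*compar)(const struct dirent **,
                              const struct dirent **));
    #endif

    #endif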
The utility configuration header dragged in a number of BSD sockets and
related APIs as a side effect of setting up the configuration. By splitting
the header and API setup into a separate header, only the small number of
places that need the functionality explicitly include it.
The update to remove curl usage from urldb must pull in the utility config
header instead to get inet_aton() and such, or compiles on some platforms
fail.
Currently NetSurf uses curl_getdate() to convert textual date and time
strings into seconds since the epoch. It is better to move this
functionality to a utility function so that curl_getdate() can easily be
replaced if required.
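A minimal sketch of such a wrapper, with a hypothetical name; it delegates
to curl_getdate() for now, but callers no longer depend on curl directly, so
the backend can be swapped out later:

    #include <time.h>
    #include <curl/curl.h>

    /* Hypothetical utility wrapper around curl_getdate(). Returns
     * -1 on parse failure, matching curl_getdate()'s convention. */
    time_t nsc_parse_datetime(const char *str)
    {
            return curl_getdate(str, NULL);
    }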
When the operations tables were created the browser table was renamed to
miscellaneous, except the actual rename patch was never applied; this fixes
that situation.
The printf formatting for size_t is specified in C99 as %zu, but on Windows
it is %Iu. This is solved by adding an inttypes-style PRI macro for size_t.
This change also uses the macro everywhere size_t is formatted.
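A minimal sketch of such a macro, following the inttypes.h PRI* convention
(the macro name here is illustrative):

    #include <stdio.h>
    #include <stddef.h>

    /* Select the platform's size_t format specifier once. */
    #if defined(_WIN32)
    #define PRIsizet "Iu"
    #else
    #define PRIsizet "zu"
    #endif

    int main(void)
    {
            size_t length = 42;
            printf("length: %" PRIsizet "\n", length);
            return 0;
    }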
If a fetcher returns with no data (no content, or HTTP error code 204) the
hlcache state machine was trying to mimesniff using non-existent header data
and reporting the resulting NSERROR_NOT_FOUND as a "BadType" message.
This changes the behaviour to be similar to that in the headers-received
case, where NSERROR_NOT_FOUND from the mimesniffing is not an error.
This is an attempt to ameliorate the situation found in #2384, where we see
the cURL connect() failing to complete. Based on the pcap from the bug log,
we believe that RISC OS is likely failing to signal the completion of the
connection to cURL. As such, cURL times out.
This change permits retries of timed-out connections in the hope that a
fresh socket FD might subsequently function correctly. The defaults chosen
mean that the previous behaviour of 30 seconds before a timeout is reported
will remain the same, but in that time we will make 3 separate attempts to
connect the socket.
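A sketch of the retry idea using libcurl's easy interface; the 10 second
connect timeout and 3 attempts are illustrative values that preserve the old
30 second overall window:

    #include <curl/curl.h>

    CURLcode fetch_with_retries(CURL *handle, const char *url)
    {
            CURLcode res = CURLE_OK;

            curl_easy_setopt(handle, CURLOPT_URL, url);
            curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, 10L);

            for (int attempt = 0; attempt < 3; attempt++) {
                    res = curl_easy_perform(handle);
                    if (res != CURLE_OPERATION_TIMEDOUT)
                            break; /* success or a non-timeout error */
                    /* Force a new connection (fresh socket FD) on
                     * the next attempt. */
                    curl_easy_setopt(handle, CURLOPT_FRESH_CONNECT, 1L);
            }
            return res;
    }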
Any fetch start error was being reported as "out of memory", which was
clearly insufficient. For example, bad URLs (the reported case was file://
with a missing /) were causing a warn_user with "out of memory". This change
now at least causes a "bad url" message.
This changes the LOG macro to be variadic, removing the need for all
callsites to have double bracketing, and allows for future improvement in
how we use the logging macros.
The callsites were changed with coccinelle and the changes checked by hand.
Compile tested for several frontends but not all.
A format annotation has also been added which allows the compiler to check
the parameters and types passed to the logging.
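A minimal sketch of a variadic logging macro with a format annotation; the
function and macro bodies are illustrative rather than NetSurf's exact
implementation:

    #include <stdarg.h>
    #include <stdio.h>

    /* The format attribute lets the compiler type-check arguments
     * against the format string at every call site. */
    static void nslog(const char *fmt, ...)
            __attribute__((format(printf, 1, 2)));

    static void nslog(const char *fmt, ...)
    {
            va_list ap;
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
    }

    /* Variadic macro: LOG("x is %d", x) instead of the old
     * double-bracketed LOG(("x is %d", x)). */
    #define LOG(fmt, ...) \
            nslog("%s:%d " fmt "\n", __func__, __LINE__, ##__VA_ARGS__)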
On some operating systems the ftruncate operation can take some time, so
move it to occur in the background maintenance operations instead of when
data blocks are initially opened. This should improve browsing
responsiveness.
It seems many filesystems are much more efficient if the block file is
allocated its entire extent at once rather than the file being continuously
grown later.
The size of the block files is known at their creation time, so this change
ensures they are grown to the full possible extent at creation, removing
future inefficient writes.
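A minimal sketch of pre-allocating a block file's full extent at creation
time; the size constant is illustrative:

    #include <fcntl.h>
    #include <unistd.h>

    #define BLOCK_FILE_SIZE (4 * 1024 * 1024) /* illustrative size */

    /* Open a block file and extend it to its full extent up front,
     * so later writes never need to grow the file. */
    int block_file_open(const char *path)
    {
            int fd = open(path, O_CREAT | O_RDWR, 0600);

            if (fd != -1 && ftruncate(fd, BLOCK_FILE_SIZE) == -1) {
                    close(fd);
                    return -1;
            }
            return fd;
    }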
Add a new interface to the content to allow automatically scaled content
redraws. This is intended to replace the thumbnail_redraw interface with
something more generic.
The generic bitmap handlers provided by each frontend are called back
from the core and therefore should be in an operation table. This was
one of the very few remaining interfaces stopping the core code from
being split into a library.
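A rough sketch of the shape of such an operations table; the field set shown
here is illustrative, not the full NetSurf bitmap table:

    #include <stddef.h>

    /* Frontend-provided bitmap operations, registered with the core
     * so the core never calls frontend symbols directly. */
    struct gui_bitmap_table {
            void *(*create)(int width, int height, unsigned int state);
            void (*destroy)(void *bitmap);
            unsigned char *(*get_buffer)(void *bitmap);
            size_t (*get_rowstride)(void *bitmap);
            void (*modified)(void *bitmap);
    };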
The fetch API previously allowed the caller to supply the storage; this was
never used and was preventing the refactoring necessary for small block
storage to be available.
Change to computing the element index from the flags passed to the store and
fetch methods, instead of passing the flags around and calculating it
everywhere.
Additionally, split out writing an element of an entry to file into a
distinct function to make the code clearer.
The content thumbnailers for each frontend were being provided with the
content's URL. This was only ever used to call the urldb thumbnail setting
API.
This changes it so that the single callsite that passed a valid URL adds the
bitmap to that URL itself in desktop_history.c, instead of forcing every
frontend to require the urldb API.
Additionally, the old API could pass the URL as NULL, which was causing
asserts where this was not an expected parameter value. Because of this,
this change fixes bug #2286, which was also present in the monkey frontend,
as both called nsurl_access() on the URL without the NULL check and caused
an assertion.
This splits a great deal of the win32 window code out from the other gui
code. It also removes large quantities of unused and junk variables and
functions.
The low level cache deserialisation was leaving bad data in a low level
cache object in the error case. This fixes it so that the object state only
gets modified on successful deserialisation of all the metadata.
In order to calculate the writeout bandwidth we need to know how long it
took to write the data to persistent storage in addition to how much was
written.
By scheduling the control data (entries index written and headers updated)
to be maintained once activity occurs on these control structures, rather
than performing a single serialisation at browser exit, the data is more
likely to be up to date and not lost on a crash.
The data scheme fetcher was over-allocating the space for decoded base64
encoded URLs and not using the base64 API that allocates correctly sized
storage.
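A minimal sketch of the decode-and-allocate pattern; the helper name and
signature here are hypothetical stand-ins for the base64 API referred to
above, shown as a prototype for shape only:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical helper: decodes base64 input and allocates
     * exactly the storage the decoded output needs. */
    int base64_decode_alloc(const char *in, size_t in_len,
                            uint8_t **out, size_t *out_len);

    static int decode_data_url_payload(const char *b64, size_t len)
    {
            uint8_t *data;
            size_t data_len;

            /* No manual over-allocation: the API sizes the buffer. */
            if (base64_decode_alloc(b64, len, &data, &data_len) != 0)
                    return -1;
            /* ... hand data/data_len to the fetch machinery ... */
            free(data);
            return 0;
    }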
Previously, content handler debugging features were accessed through global
variables. This allows the setting of debugging parameters via a content
API, giving per-content control over debugging features.
Currently this is only used by the html content handler to toggle global
redraw debugging.
The frontends previously had to use an html renderer API to get the encoding
of a content. This also required explicitly checking the content's type
rather than using the existing content API to abstract this knowledge.
Update the API which allows frontends to acquire the page features (images,
link URLs or form elements) present at the given coordinates within a
browser window.
By making this an explicit browser_window API, and by using the browser.h
header for the associated data structure with more appropriate API naming,
the usage is much more obvious and contained.
Additionally, the link URL is now passed around as a nsurl, stopping it
being converted from nsurl to text and back again several times.
The die() API for abnormal termination does not belong within the core of
netsurf; instead, errors are propagated back to the callers.
This is the final part of this change, and the API is now only used within
some parts of the frontends.
By using an error code return we can gracefully handle fetcher registration
failures instead of just immediately aborting.
The curl handler was also cleaned up and its documentation improved as a
side effect.
The netsurf.h header should *only* contain the registration, core
initialisation and finalisation methods. Version information is best placed
in its own header.
Also remove any unneeded inclusions of this header, limiting it to solely
the places where the relevant API is required.
This change updates the llcache to use the scheduler to notify users of the
llcache of events. This should be just as safe as before and is part of an
effort to eventually remove hlcache_poll and llcache_poll, because fetchers
should schedule themselves if need be.
This is a big change despite the diminutive nature of the patch. Please
report issues promptly if they turn up after this change and were not
visible before it.
Signed-off-by: Daniel Silverstone <dsilvers@netsurf-browser.org>
Reviewed-by: Vincent Sanders <vince@netsurf-browser.org>
This rationalises the path construction and basename file operations. The
default implementation is POSIX, which works for all frontends except
windows, riscos and amiga, which have differing path separators and rules.
These implementations are significantly more robust than the previous nine
implementations and do not use unsafe strncpy or buffers with arbitrary
length limits.
These implementations also carry full documentation comments.
By skipping empty headers and correctly dealing with whitespace around
header names, we store fewer entries, with better adherence to the allowed
values in http responses.
Added a content interface for search.
Removed the bw->cur_search search context. The desktop layer now does
nothing except pass search requests from the front end on to the bw's
current_content via the content interface.
The search API is reduced to a pair of functions at each level:
{desktop|content|html|textplain}_search
and
{desktop|content|html|textplain}_search_clear
Updated the front ends to use the simplified search API. Only tested GTK
and RO builds.
These changes confine the search code to render/. However, search still
uses struct selection, the handling for which is still spread over desktop/
and render/. The render/search code itself also still fiddles with html and
textplain privates.
Keypresses now go via the content interface.
Contents no longer shove the selection object into browser windows.
Contents report selection existence by sending a message.
HTML contents keep track of where selections within them exist.
Contents report whether they have input focus via the caret setting message.
The caret can be hidden (input/paste still possible) or removed.
Consolidated textarea selection handling.
Made textareas report their selection status changes to the client.
Various textarea fixes.
Changed how we decide when to clear selections and when to give focus.
Sadly, this breaks path cookies on HTTPS sites. The correct
fix is to implement RFC6265 in full (probably replacing
urldb with something less complex, too).
This reverts commit 924f8844d4.
+ urldb API now takes URLs as nsurl, rather than string.
+ urldb internally stores full URLs with a nsurl ref.
+ urldb internally stores schemes as lwc_string.
+ Load and save of cookies and the URL file may be slower, since
  we now need to create a nsurl.
+ Everything else should be faster, and there should be much
  less allocating/freeing and much less parsing of the same
  url over and over again.
+ Updated urldbtest for the new urldb API.
+ urldbtest now cleans up at the end.
+ Added a lwc_string iterator to the end of urldbtest.
+ Adding some broken URLs (such as http:domain/) will now
  work, since nsurl fixes them up (to http://domain/); see the
  sketch below.
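A small sketch of the new calling convention, creating a nsurl at the
boundary; nsurl_create() and nsurl_unref() are NetSurf's nsurl API, while
the urldb call shown is illustrative:

    #include "utils/nsurl.h"

    static void remember_url(const char *url_text)
    {
            nsurl *url;

            /* nsurl normalises the input, so a broken URL such as
             * "http:domain/" becomes "http://domain/". */
            if (nsurl_create(url_text, &url) != NSERROR_OK)
                    return;

            urldb_add_url(url); /* illustrative urldb entry point */
            nsurl_unref(url);
    }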