Fixes #12710.
Signed-off-by: Augustin Cavalier <waddlesplash@gmail.com>
I fixed the modifications to the Jamfiles in src/bin; they were all wrong
in the patch.
B{Abstract,Datagram,Secure}Socket:
- Add functionality to listen for and accept new connections, thus allowing
one to use the socket classes for server functionality as well.
BSecureSocket:
- Adjust to take into account differences between how SSL needs to be called
when accepting an incoming connection vs initiating an outbound one.
The handshake on the accepted connection still fails for unknown reasons
at the moment, though.
Note that these changes break the ABI, and thus any packages making use of
them directly will need a rebuild.
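For illustration, the server side could then look roughly like this (a minimal
sketch; the exact Bind()/Listen()/Accept() signatures are assumed here, not
taken from the final API):

#include <sys/socket.h>
#include <NetworkAddress.h>
#include <Socket.h>

status_t
RunEchoServer()
{
    BSocket listener;
    // Bind to port 4242 on all interfaces, then start listening.
    if (listener.Bind(BNetworkAddress(AF_INET, NULL, 4242)) != B_OK
        || listener.Listen(5) != B_OK)
        return B_ERROR;

    // Accept() is assumed to hand back a connected socket for the peer.
    BAbstractSocket* connection = NULL;
    if (listener.Accept(connection) != B_OK)
        return B_ERROR;

    // Echo a single read back to the client.
    char buffer[256];
    ssize_t length = connection->Read(buffer, sizeof(buffer));
    if (length > 0)
        connection->Write(buffer, length);

    delete connection;
    return B_OK;
}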
* When using a proxy, HTTPS connections must still go directly to the
target website. The proxy can then act as a TCP stream relay and just
transmit the raw SSL stream between the client and website.
* For this, we ask the proxy by sending an HTTP request with the CONNECT
method. If the proxy supports this, we can then send anything as the
payload and it will be forwarded.
* Untested, as the network here in Dusseldorf doesn't let me use a
proxy.
Ticket: #10973
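For illustration, the exchange with the proxy looks like this (standard
CONNECT usage; host and port are just examples):

    CONNECT www.example.com:443 HTTP/1.1
    Host: www.example.com:443

    HTTP/1.1 200 Connection established

The raw SSL handshake and traffic then flow over that same connection, with
the proxy acting only as a relay.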
When an HTTPS request uses an SSL certificate that OpenSSL considers
untrusted, and the user decides to continue anyway, add the certificate
to an exception list. Match certificates against this list and don't ask
the user again if they are already there.
Fixes #12004. Thanks to markh for the initial patch and peeking into the
WebKit code!
netresolv (and libbind) won't cache DNS requests, which can result in a
lot of DNS requests being made for the same host. Implement a simple
cache in RAM (local to each application) which will keep the most
recently requested addresses cached. This can speed up loading of an
HTTP page a lot, by saving a DNS request for each resource stored on the
same server as the main page.
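Purely as an illustration of the idea (names and structure here are
hypothetical, not the actual netresolv code), such a per-application cache
boils down to something like:

#include <map>
#include <string>
#include <time.h>

// Map a query name to the raw resolver reply plus an expiry derived from the
// record's TTL, so repeated lookups skip the network round-trip.
struct CacheEntry {
    std::string reply;
    time_t expires;
};

static std::map<std::string, CacheEntry> sQueryCache;

bool
LookupCached(const std::string& host, std::string& reply)
{
    std::map<std::string, CacheEntry>::const_iterator it
        = sQueryCache.find(host);
    if (it == sQueryCache.end() || it->second.expires < time(NULL))
        return false;
    reply = it->second.reply;
    return true;
}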
The BNetworkRoute class manages a route_entry and the sockaddr's
associated with it. It replaces the direct use of route_entry in the
BNetworkInterface API.
Using route_entry is fragile and inconvenient as it only holds pointers
to the sockaddr's. When getting a list of routes from the kernel, each
route_entry is set up so that its pointers point into the single flat
buffer that is passed around. Creating a copy of the route_entry and
then deleting the flat buffer makes the pointers in the copy stale.
Returning these route entries therefore always led to a use-after-free
when they were eventually used.
BNetworkRoute also takes over the code and functionality of getting
routes from RouteSupport. The corresponding method in BNetworkRoster is
replaced by a static method in BNetworkRoute.
Also distinguish between the default route and gateway of an interface.
GetDefaultRoute() now gets the default BNetworkRoute for the interface
while GetDefaultGateway() gets the associated gateway address within
that default route. Adjust network preferences panel to this change.
Note that we currently only seem to have per interface default routes
and not an actual global default route. This was already the case before
these changes and I did not further investigate what this means.
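A sketch of how the adjusted API reads from the caller's side (the signatures
are assumed from the description above, and the interface name is just an
example):

#include <stdio.h>
#include <sys/socket.h>
#include <NetworkAddress.h>
#include <NetworkInterface.h>
#include <NetworkRoute.h>

void
PrintDefaults()
{
    BNetworkInterface interface("en0");

    BNetworkRoute route;
    if (interface.GetDefaultRoute(AF_INET, route) != B_OK)
        return;
    // The route owns copies of its sockaddr's, so it stays valid after the
    // kernel's flat buffer used to fetch it has been freed.

    BNetworkAddress gateway;
    if (interface.GetDefaultGateway(AF_INET, gateway) == B_OK)
        printf("default gateway: %s\n", gateway.ToString().String());
}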
This reverts commit 31ea76548a.
Adrien, please try again without clobbering the otherwise nice
BNetworkInterface API!
Conflicts:
src/kits/network/getifaddrs.cpp
* BNetworkInterfaceAddress is moved to libnetwork. It is modified to not
use BNetworkAddress (which is in libbnetapi) and instead use sockaddr
and sockaddr_storage directly. All callers are adjusted to this.
* Some support code is shared between BNetworkInterface and
BNetworkInterfaceAddress, move it to libnetwork but in the BPrivate
namespace.
* Make it possible to extract more useful data from the certificate
* Also get the OpenSSL error message when a certificate can't be
validated. Send it to the verification failure callback so it can be
shown to the user.
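For reference, the message comes out of OpenSSL roughly like this (the
callback wiring around it is simplified):

#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

// OpenSSL invokes this during the handshake for every certificate in the
// chain; on failure, translate the error code into the human-readable string
// that is then handed to the verification failure callback.
static int
VerifyCallback(int ok, X509_STORE_CTX* context)
{
    if (!ok) {
        int error = X509_STORE_CTX_get_error(context);
        const char* message = X509_verify_cert_error_string(error);
        fprintf(stderr, "certificate verification failed: %s\n", message);
    }
    return ok;
}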
* Since DNS names are normally restricted to ASCII, the use of UTF-8 in domain
names is implemented using a "punycode" encoding.
* The request to the DNS server must be sent with the ASCII
representation of the domain name, however the Unicode one should be
used for user-visible parts.
* ICU provides an implementation of the conversion, which we use here.
* Conversion is currently done in-place and modifies the BUrl object
(this is similar to UrlEncode/UrlDecode).
* Adjust existing IDN test to make use of these methods. It's passing
now.
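The ASCII direction of the conversion, reduced to its core (error handling is
minimal here, and the wrapper function is only illustrative):

#include <unicode/uidna.h>

bool
DomainToAscii(const char* utf8Name, char* buffer, int32_t capacity)
{
    UErrorCode error = U_ZERO_ERROR;
    UIDNA* idna = uidna_openUTS46(UIDNA_DEFAULT, &error);
    if (U_FAILURE(error))
        return false;

    UIDNAInfo info = UIDNA_INFO_INITIALIZER;
    uidna_nameToASCII_UTF8(idna, utf8Name, -1, buffer, capacity,
        &info, &error);
    uidna_close(idna);

    // "bücher.example" comes back as "xn--bcher-kva.example", which is what
    // actually goes out in the DNS query.
    return U_SUCCESS(error) && info.errors == 0;
}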
* Move default context management to BUrlRequest since some code
(including the testsuite) bypasses the BUrlProtocolRoster.
* Introduce proxy host and port in BUrlContext
* Have BHttpRequest use the proxy when making requests
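Usage would then look about like this (the SetProxy() name is assumed from
the description above, and the proxy host and port are placeholders):

#include <HttpRequest.h>
#include <Url.h>
#include <UrlContext.h>

void
SetUpProxiedRequest()
{
    BUrlContext* context = new BUrlContext();
    context->SetProxy("proxy.example.com", 3128);

    BHttpRequest request(BUrl("http://www.haiku-os.org/"));
    request.SetContext(context);
        // the request now connects to the proxy instead of the target host
}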
* Remove unneeded field fOutputHeaders and convert it to a local for the
only method that uses it.
* Don't return EOVERFLOW when flushing data from ZLib (the ZLib
decompressor returns this, but the zlib docs state that this is NOT an
error condition).
* Replace unneeded temporary BNetBuffer of fixed size with BStackOrHeapArray.
* receiveEnd is set in a different place in case of chunked transfers,
which would cause the decompressor to never be flushed.
* In the case of chunked transfers, we call Flush() without any input
data (to flush only whatever is remaining in the decompression buffer).
This causes ZLib to return Z_BUF_ERROR which is translated to
B_BUFFER_OVERFLOW. This is a non-fatal error and is expected behavior in
that case. Don't handle this as an error, and do use the extracted data.
Fixes various cases of missing the last chunk of a page (pastie.org,
Google search results, and more).
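The flush described above boils down to something like the following (a
behavioural sketch, not the actual decompressor code):

#include <zlib.h>
#include <DataIO.h>
#include <SupportDefs.h>

// Flush whatever remains in the inflate stream without providing new input.
status_t
FlushInflateStream(z_stream& stream, BDataIO& target)
{
    uint8 buffer[4096];
    stream.next_in = NULL;
    stream.avail_in = 0;

    int zlibError;
    do {
        stream.next_out = buffer;
        stream.avail_out = sizeof(buffer);
        zlibError = inflate(&stream, Z_SYNC_FLUSH);

        size_t produced = sizeof(buffer) - stream.avail_out;
        if (produced > 0)
            target.Write(buffer, produced);
    } while (zlibError == Z_OK && stream.avail_out == 0);

    // Z_BUF_ERROR only means "no progress was possible" and is expected here
    // since no input was provided; it must not be treated as a failure, and
    // any data already produced is still valid.
    if (zlibError == Z_OK || zlibError == Z_STREAM_END
        || zlibError == Z_BUF_ERROR)
        return B_OK;
    return B_ERROR;
}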
* Each BHttpAuthentication object is locked on all field accesses,
* They are owned by the BUrlContext and never deleted, so there is no
need for reference-counting them,
* The BUrlContext itself is now reference counted, and all BUrlRequests
hold a reference to it.
This makes sure using the BHttpAuthentication objects from requests is
thread-safe.
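In code, the intended ownership pattern is roughly as follows (assuming
BUrlContext now derives from BReferenceable, as described, and that
SetContext() acquires its own reference):

#include <Referenceable.h>
#include <UrlContext.h>
#include <UrlRequest.h>

void
AttachFreshContext(BUrlRequest& request)
{
    // The BReference adopts the initial reference from "new" and releases it
    // when it goes out of scope.
    BReference<BUrlContext> context(new BUrlContext(), true);

    // The request keeps the context alive for as long as it is in use.
    request.SetContext(context.Get());
}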
* Change the semantics of the iterators' copy constructor and assignment
operator: they now return a new iterator for the same cookie jar (and
same url for the UrlIterator). They don't try to point to the same
position as the copied iterator. The only purpose of these is to write
code such as:
Iterator it = jar.GetIterator();
so having a full copy isn't that useful.
* The per-domain cookie lists are now protected with a read-write lock.
The iterators retain a read lock while they are handling cookies from
that list. They get a write lock when doing Remove. Adding a cookie to
the jar also gets the write lock for the matching list
* Fix a memory leak when adding a new domain-list to the jar failed
* Simplify the declaration of the PrivateHashMap type (it would be
even simpler if HashMap was a public API)
* The domain hashmap is now a SynchronizedHashMap. It is locked as long
as an Iterator or UrlIterator exists, which may be a problem as these
are public APIs. Writing safe iterators for a hashmap with concurrent
accesses is not easy, so the API could be modified to return a list of
domains and a list of cookies for a given domain or URL instead. This
would suit the intended uses just as well.
* The jar now stores const cookies, so there is no need to lock them for
access/modification. Updating a cookie is done by replacing it with
another one in the jar (with the same domain and value). There is still
the problem of deleting a cookie while other threads may still access
it; this will be fixed by making cookies BReferenceable.
These were getting out of sync and causing trouble, and they are easy to
compute from existing information.
Fixes some problems detected by the testsuite where the user/password or
the host would sometimes disappear from the URL.
* The DataReceived hook gets a position argument, making it possible for
listeners to handle out-of-order data (from two range requests at
different positions, for example)
* Adjust HaikuDepot (only user of the API in our sources)
* Add a copy constructor to HTTPRequest that copies the relevant
parameters from an existing request. Makes it easy to repeat a request
with a different range. Could be useful for restarting downloads, or
parallelizing them.
* Add SetRangeStart and SetRangeEnd calls to HTTPRequest; no implementation
yet. I'm putting all the API changes in this commit as it needs to be
synced with a matching haikuwebkit release.
* All archs must update to HaikuWebkit 1.3.0. Previous versions are
broken by this.
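A sketch of a listener making use of the new position argument (the exact
hook signature is assumed from the description above, and the class is only
illustrative):

#include <File.h>
#include <UrlProtocolListener.h>
#include <UrlRequest.h>

class RangeDownloadListener : public BUrlProtocolListener {
public:
    RangeDownloadListener(const char* path)
        :
        fFile(path, B_WRITE_ONLY | B_CREATE_FILE)
    {
    }

    virtual void DataReceived(BUrlRequest* caller, const char* data,
        off_t position, ssize_t size)
    {
        // Chunks from parallel range requests can arrive out of order;
        // write each one at its absolute offset in the target file.
        fFile.WriteAt(position, data, size);
    }

private:
    BFile fFile;
};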
* BSecureSocket::CertificateVerificationFailed() took a BCertificate
instance by value as a parameter.
BCertificate deletes internal data in its destructor. Passing an
object by value creates a copy, so the copy attempted to delete
the internal data again during its destruction.
This caused mail_daemon to crash here when it came across a failed
certificate.
* Fix: pass BCertificate object as reference.
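In essence, the declaration goes from the by-value form to a reference
(return type and any other parameters as assumed here):

// before: the copy made for the parameter freed the shared internal
// X509 data again in its destructor
virtual bool CertificateVerificationFailed(BCertificate certificate);

// after: the caller's object is used directly, nothing is freed twice
virtual bool CertificateVerificationFailed(BCertificate& certificate);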
* Instead of creating an OpenSSL context for each socket, use a global
one and initialize it lazily when the first SecureSocket is created
* Load the certificates from our certificate list so SSL certificates
sent by servers can be validated.
* Add a callback for signalling that certificate validation failed; the
default implementation proceeds with the connection anyway (to keep the
old behavior).
* Introduce BCertificate class, that provides some information about a
certificate. Currently it's only used by the callback mentioned above,
but it will be possible to get the leaf certificate for the connection
after it's established.
Review of the API and implementation is welcome, before I start making
use of this in HttpRequest and WebKit to allow the user to accept new
certificates.
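A subclass could then hook in along these lines (a sketch only; the exact
signature may differ, and a real implementation would ask the user rather
than hard-code a policy):

#include <SecureSocket.h>

class StrictSecureSocket : public BSecureSocket {
public:
    virtual bool CertificateVerificationFailed(BCertificate& certificate)
    {
        // Refuse the connection instead of the default behavior of
        // proceeding anyway.
        return false;
    }
};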
Use standard error codes instead.
This allows using the error codes returned by the underlying functions
directly, and makes it possible to use strerror for debugging. So, we
can also remove StatusString() from the various *Request classes.
When calling Stop(), we expect the request thread to exit as soon as
possible. Closing the connection unlocks it from any blocking read() or
write(), avoiding some lockup situations.
* BUrlResult is back, with ContentType and Length methods.
* BHttpResult subclasses it and uses HTTP header fields to implement them.
* Introduce BDataRequest for "data" URIs. These embed the data inside
the URI, either as plaintext or base64 encoded.
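For example, both of these standard forms decode to "Hello, Haiku!":

    data:,Hello%2C%20Haiku!
    data:text/plain;base64,SGVsbG8sIEhhaWt1IQ==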
We can send the data directly to the output socket instead of copying it
into a BString first, at the cost of very slightly less information in
debug output.
When using the copy constructor of BNetEndpoint the socket of the
original endpoint gets dup'ed. The Accept() method later directly resets
the fSocket member of the newly created BNetEndpoint to the socket
returned by accept(). The socket dup'ed by the copy constructor was
therefore leaked.
Of course dup'ing the socket and copying the local and remote addresses
is superfluous in the accept case, as these members all get set to new
values. To reduce that overhead there is now a new private constructor
that directly takes the final socket and the remote and local addresses.