Some websites set cookies expiring in the (not so) far future, after the year 2038.
So, using time_t to store the cookie expiration date won't do. Use the
BDateTime class instead.
This makes goodsearch.com login work again (#10460).
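A minimal sketch of the idea, assuming the shared DateTime.h header and
illustrative names rather than the actual cookie class members:

#include <DateTime.h>

// Build an expiration timestamp lying past 2038, which a signed 32-bit
// time_t cannot represent; BDateTime stores a full calendar date and time.
BDateTime
MakeCookieExpiration()
{
	BDate date(2040, 1, 31);	// year, month, day
	BTime time(12, 0, 0);		// hour, minute, second
	return BDateTime(date, time);
}

The cookie code can then keep and compare BDateTime values directly instead
of squeezing them into a time_t.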
Sometimes an HTTP request would get stuck in a call to connect(). While
our read() and write() implementations are interrupted if you close()
the socket, our connect() isn't. Closing would nonetheless abort the
connection attempt, but connect() would stay blocked for quite a long
time.
When navigating away from a page, WebKit closes all pending connections,
and in our network backend this also means freeing the request object.
But the destructor can't be called until the request thread has stopped
running, to ensure proper cleanup.
Avoid the lockup in connect() by sending a signal (SIGUSR1) to the thread
we want to unblock. This interrupts the syscall, and the thread exits
immediately.
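A rough sketch of the unblocking trick, with illustrative names rather than
the actual request internals:

#include <signal.h>
#include <string.h>
#include <OS.h>

static void
empty_signal_handler(int)
{
	// Installed without SA_RESTART, so a blocking connect() returns with
	// EINTR instead of resuming.
}

void
InstallUnblockHandler()
{
	struct sigaction action;
	memset(&action, 0, sizeof(action));
	action.sa_handler = empty_signal_handler;
	sigaction(SIGUSR1, &action, NULL);
}

void
StopRequestThread(thread_id requestThread)
{
	// Interrupt a connect() that may still be in progress, then wait for
	// the thread to exit before the request object is destroyed.
	send_signal(requestThread, SIGUSR1);

	status_t result;
	wait_for_thread(requestThread, &result);
}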
* The PLL needs some time to stabilize and generate a reliable pixel
clock.
* Enabling the display pipe before that happens leads to some frames
with bad timing. This may confuse some VGA displays into reporting an
"unsupported mode" error (see the sketch below).
Fixes #10271. Thanks for investigating this!
Iterate through each BFS volume and find vector icon data in apps
using a BFS query. Cap off at 128 total icons; we could collect more if
desired. The default install comes with ~100 at present.
Make sure we have at least 20 icons (15 are displayed at a time) before
we start drawing any.
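A minimal sketch of the harvesting loop, using the public Storage Kit API;
the query predicate and attribute handling are illustrative, the 128 cap is
the one mentioned above:

#include <Entry.h>
#include <Node.h>
#include <Query.h>
#include <Volume.h>
#include <VolumeRoster.h>
#include <fs_attr.h>

static const int32 kMaxIcons = 128;

int32
CollectVectorIcons()
{
	int32 count = 0;

	BVolumeRoster roster;
	BVolume volume;
	while (roster.GetNextVolume(&volume) == B_OK && count < kMaxIcons) {
		if (!volume.KnowsQuery())
			continue;

		BQuery query;
		query.SetVolume(&volume);
		query.SetPredicate("BEOS:APP_SIG=*");
			// look at applications only; the predicate is illustrative
		if (query.Fetch() != B_OK)
			continue;

		entry_ref ref;
		while (query.GetNextRef(&ref) == B_OK && count < kMaxIcons) {
			BNode node(&ref);
			attr_info info;
			if (node.InitCheck() != B_OK
				|| node.GetAttrInfo("BEOS:ICON", &info) != B_OK)
				continue;

			// The raw vector icon data would be read and stored here.
			count++;
		}
	}

	return count;
}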
Rename VectorIcon to vector_icon and treat it like the POD struct that it is.
Also add a type_code parameter to vector_icon that should be set to
B_VECTOR_ICON.
Some style fixes are also included, e.g. defines were turned into static
const int32 values.
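An assumed shape of the renamed struct, for illustration only; the real
definition may differ in detail:

#include <SupportDefs.h>

struct vector_icon {
	uint8*		data;	// raw vector icon data from the BEOS:ICON attribute
	size_t		size;
	type_code	type;	// expected to be B_VECTOR_ICON
};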
* as Ingo has pointed out, the remote user setting doesn't
relate to the build configuration at all, so setting the
remote user via HAIKU_REMOTE_USER in UserBuildConfig or
via the shell environment is the way to go
* additionally: drop debug output
Now that DrawingContext makes it possible to draw on a ServerBitmap
without the need for a BView, we can replay pictures on app_server side,
avoiding the cost of creating a BBitmap, offscreen BWindow, and BView
from the application side.
The offscreen drawing context gets the same state as the view it's
rendering the picture for, so font size, drawing mode, etc. are used.
The implementation is still the suboptimal one, converting the BBitmap
to a BRegion, and using that for clipping. Changing that comes next.
This draws a rect clipped by a text, and a text clipped by a rect. They
should look the same, but they don't, because of the lack of antialiasing
support in ClipToPicture.
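A rough sketch of what the test exercises, using the public BView API (the
view is assumed to be attached and locked; the actual test lays the two
results out side by side):

#include <Picture.h>
#include <View.h>

void
DrawClippedPair(BView* view)
{
	// Record a string into a picture and use it to clip a filled rect.
	view->BeginPicture(new BPicture());
	view->DrawString("Clipping", BPoint(10, 30));
	BPicture* textPicture = view->EndPicture();

	view->PushState();
	view->ClipToPicture(textPicture);
	view->FillRect(BRect(0, 0, 150, 40));
	view->PopState();

	// Record a rect into a picture and use it to clip the same string;
	// ideally both passes produce the same pixels.
	view->BeginPicture(new BPicture());
	view->FillRect(BRect(0, 0, 150, 40));
	BPicture* rectPicture = view->EndPicture();

	view->PushState();
	view->ClipToPicture(rectPicture);
	view->DrawString("Clipping", BPoint(10, 30));
	view->PopState();

	delete textPicture;
	delete rectPicture;
}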
* We should have the buildbots compile this to make sure it still works
* I had to split two ServerApp methods into a separate C++ file to link
them into libtestappserver.so
* some fixes related to the switch to package management and better hybrid
support in the jam rules; the move of the MIME code from the registrar to
the Storage Kit, the merge of the Locale Kit and ICU into libbe, and a few
more.
* Modified the test_app_server hwinterface and rdef file so it is not a
background app, and the window isn't floating. Otherwise, hiding the
window would leave you without a way to recover it.
* add option --remote-user to configure, which sets HAIKU_REMOTE_USER
* add evaluation of the HAIKU_REMOTE_USER variable when ssh-ing
into git.haiku-os.org
* Handle shifting the contents of the next paragraph into the preceding
paragraph if text up to and including the line break of the preceding
paragraph is removed (see the sketch below).
* Fixed the loop to remove the correct paragraph when it hits the last
paragraph (though that code is still untested).
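A sketch of the merge step, with hypothetical accessors; the real Paragraph
and TextSpan classes expose similar, but not necessarily identical,
operations:

// Append all spans of the following paragraph to the preceding one once
// the line break between them has been removed; the caller then drops
// `next` from the document's paragraph list.
void
MergeWithNext(Paragraph& paragraph, const Paragraph& next)
{
	for (int32 i = 0; i < next.CountTextSpans(); i++)
		paragraph.Append(next.TextSpanAt(i));
}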
Even for an empty Paragraph, some meaningful bounds can be computed when
it contains an empty TextSpan. However, this is not yet achieved, since
no LineInfo is generated for such an empty Paragraph.
There seems to be a problem with the architecture: when building for an
x86_gcc2 system, things are generated in libbe_test/x86 and then fail
with a lot of undefined references. Help welcome.
In order to properly implement ClipToPicture in BView, we need to render
a Picture to a Bitmap. This is currently done client-side, but the
overhead for this (creating a BBitmap that accepts views, including a
window thread, adding a view to it, and rendering the picture to the
view, then sending the result to app_server) isn't acceptable. Moreover,
the bitmap drawn this way is clipped to the view size, and the clipping
won't work when the view is scaled or translated. So, we need to move
the Bitmap creation server-side.
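For reference, a sketch of the client-side path being replaced, using the
public API:

#include <Bitmap.h>
#include <Picture.h>
#include <View.h>

BBitmap*
RenderPictureClientSide(BPicture* picture, BRect bounds)
{
	BBitmap* bitmap = new BBitmap(bounds, B_RGBA32, true);
		// "true" makes the bitmap accept views, which spawns a window
		// thread behind the scenes
	BView* view = new BView(bounds, "offscreen", B_FOLLOW_NONE, 0);

	bitmap->AddChild(view);
	bitmap->Lock();
	view->DrawPicture(picture);
	view->Sync();
	bitmap->Unlock();

	// The pixel data then still has to be shipped to app_server, and the
	// result is clipped to `bounds`, which breaks under scaling.
	return bitmap;
}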
However, app_server currently has no means of rendering a picture to a
bitmap on its own. Factor out the
relevant parts of View: a DrawingState stack with PushState/PopState, a
DrawingEngine, and a ConvertToScreen transformation. Another implementation
of the DrawingContext will allow us to also draw a picture directly using
a Painter and low-level pixel buffer, in a format suitable for use as an
AGG alpha mask.
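An assumed outline of the factored-out interface, just to illustrate the
split; the real header may differ:

#include <Point.h>

class DrawState;
class DrawingEngine;

class DrawingContext {
public:
	virtual						~DrawingContext() {}

	// drawing state stack shared with View
			void				PushState();
			void				PopState();
			DrawState*			CurrentState() const { return fDrawState; }

	// where the drawing commands end up
	virtual	DrawingEngine*		GetDrawingEngine() const = 0;

	// a View maps to the screen; an offscreen context maps to its bitmap
	virtual	void				ConvertToScreen(BPoint* point) const = 0;

protected:
			DrawState*			fDrawState;
};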