* This is a header used by several parts of the code that should not
need to know about ELF symbol overriding, or about the fact that it is
optional.
* When the define is set, the methods will not be defined, but they
shouldn't be called then, either.
* This makes sure the memory layout of the class stays the same with the
define set or unset, and users can rely on it.
Fixes UnitTester on gcc2.
Tested against wget, curl, and git, which all were still able
to verify certificates and download from HTTPS sites.
Signed-off-by: Oliver Tappe <zooey@hirschkaefer.de>
We declare most of the XDG environment variables from this spec:
- XDG_CONFIG_HOME
- XDG_DATA_HOME
- XDG_CONFIG_DIRS
- XDG_DATA_DIRS
- XDG_CACHE_HOME
I'm not yet sure what to do with XDG_RUNTIME_DIR.
* Remove the unneeded field fOutputHeaders and convert it to a local in
the only method that uses it.
* Don't return EOVERFLOW when flushing data from ZLib (the ZLib
decompressor returns this, but the zlib docs state that this is NOT an
error condition).
* Replace an unneeded fixed-size temporary BNetBuffer with BStackOrHeapArray.
* Haiku does not currently provide crtbeginS.o and crtendS.o, so
we fall back to crtbegin.o and crtend.o.
This should not have any ill effects, as the available compilers on
Haiku do not use __cxa_atexit() yet.
* Gcc is now using __cxa_atexit, so we need to use the crtbegin
and crtend implementations that are meant to be used for shared
libraries. This avoids crashes of servers that load add-ons
(Media-Addon-Server and Print-Server) when shutting down Haiku.
* As executables are shared objects on Haiku, we use crtbeginS.o and
crtendS.o for those, too.
* To simplify, we even use crtbeginS.o and crtendS.o in the kernel,
but there they don't currently make a difference, as the respective
initialization and cleanup functions are not being invoked by the
kernel.
- We let FFMPEG keep track of the correct relationship between presentation
start time of the encoded video frame and the resulting decoded video frame.
This simplifies our code, meaning fewer lines of code to maintain :)
- Update documentation, pointing out some corner cases when calculating the
correct presentation start time of a decoded video frame under certain
circumstances.
- Fix doxygen: Use doxygen style instead of javadoc style.
- No functional change intended.
Signed-off-by: Colin Günther <coling@gmx.de>
- Main purpose is to make reading the function DecodeNextFrame() easier on the
eyes, by moving out auxiliary code.
Note: The media_header update code for the start_time is still left in
DecodeNextFrame(). This will be addressed in a later commit specifically
targeted at handling start_time calculations for incomplete video frames.
- Also updated / added some documentation.
- No functional change intended.
Signed-off-by: Colin Günther <coling@gmx.de>
- This commit makes the mpeg2_decoder_test successfully decode the test video
into 84 consecutive PNG images, yeah :)
- If this commit broke playing video files for you, please file a bug report.
I've tested with only one video file (big_buck_bunny_720p_stereo.ogg) that
everything still works.
- The implementation has some shortcomings though, that will be addressed with
some later commits:
1. Start time of media header is wrongly calculated. At the moment we are
using the start time of the first encoded data chunk we read via
GetNextChunk(). This works only for chunks that contain exactly one
frame, but not for chunks that contain the end or middle of a frame.
2. Fields of the media header aren't updated when there is a format change
in the middle of the video stream (for example the pixel aspect ratio
might change in the middle of a DVB video stream (e.g. switch from 4:3
to 16:9)).
- Also fix a potential bug where the CODEC_FLAG_TRUNCATED flag was always
set, due to missing braces.
Signed-off-by: Colin Günther <coling@gmx.de>
- It is just one flag that needs to be set, so that streaming video data can be
handled by the FFMPEG library.
- For reference: This flag is based on FFMPEG's 0.10.2 video decode example
(doc/example/decoding_encoding.c).
- The _DecodeNextVideoFrame() method needs to be adjusted (still to come) to
take streamed data into account. So the flag on its own doesn't help, but it
is a reasonable step in that direction.
Signed-off-by: Colin Günther <coling@gmx.de>
- Factor out the deinterlacing and color converting part to make the code more
readable. This makes it easier to understand which code belongs to the actual
decoding process and which code to the post processing.
- There seems to be no performance impact from factoring out this part (I just
looked at the spikes in the process manager), but one can always inline the
method if a closer performance assessment (e.g. by enabling the existing
profiling code) suggests so.
- Document the _DecodeVideo() method a little bit. Maybe someone can document
the info parameter, as I'm a little bit clueless here.
- No functional change intended.
Signed-off-by: Colin Günther <coling@gmx.de>
(cherry picked from commit c5fa095fa73d47e75a46cfc138a56028fcc01819)