- Each PowerMonitor now exports a set of descriptors rather than
only a single one. This allows e.g. PowerButtonMonitor to watch
multiple possible button event sources rather than one.
This gets the power button working again on some hardware that exports
multiple ACPI power buttons, but offers no obvious way to determine
which one is actually active/being used. In the long term though, it'd
be nice to have a well-defined kernel power event interface that an app
could subscribe to, rather than having to watch individual devfs
descriptors.
Thanks to tqh and korli for advice/review.
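As an illustration of the descriptor-set approach, here is a minimal sketch of
such an interface; the real PowerMonitor class in the power daemon may look
different, so treat the names below as placeholders:

// Minimal sketch only: a monitor reports every descriptor it watches, so
// PowerButtonMonitor can open all ACPI button event sources it finds in devfs
// instead of just one.
#include <set>

class PowerMonitor {
public:
    virtual ~PowerMonitor() {}

    // All devfs file descriptors this monitor wants to have watched.
    virtual const std::set<int>& FDs() const = 0;

    // Called when one of those descriptors has an event pending.
    virtual void HandleEvent(int fd) = 0;
};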
- On some hardware, both the fixed-function FADT and the
device-based power interfaces are present. In such a case we
would fail to publish one or the other, depending on which was
enumerated first, since we'd always attempt to publish the same
name regardless. Now we differentiate the device name for the
fixed vs. device-based case.
- Only enable fixed function for actual fixed devices.
- Improve tracing.
This still uses the request I had prepared earlier, not the new bulk
request, which can get information for the entire package list in one
go. It also has no caching, so it runs for quite a while in the
background (dedicated thread, so hopefully no harm done by keeping it
enabled).
Thanks Augustin, for the parser!
Based on an earlier piece of source code of mine that parsed JSON into
QObjects, this JSON parser creates a BMessage tree.
Will be used by Stephan in HaikuDepot for communication with the web app.
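For illustration, this is the shape of the resulting BMessage tree for a small
JSON document, built by hand with standard BMessage calls; the parser's actual
entry point is not part of this log, and how array entries are keyed here is an
assumption:

// Hand-built equivalent of parsing
// {"name": "HaikuDepot", "version": 1.0, "tags": ["pkg", "gui"]}:
// JSON objects and arrays become nested BMessages, scalars become typed fields.
#include <Message.h>

BMessage
BuildExampleTree()
{
    BMessage tags;
    tags.AddString("0", "pkg");        // array entries keyed by their index
    tags.AddString("1", "gui");

    BMessage root;
    root.AddString("name", "HaikuDepot");
    root.AddDouble("version", 1.0);
    root.AddMessage("tags", &tags);    // nested message for the array

    return root;
}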
- Kudos to Marcus Overhagen for laying out the general idea of automatic
detection by sharing some of his dvb code examples with me.
- Automatically detect the audio frame rate, channel count and sample format.
- Share audio sample format conversion code between AVFormatReader and
AVCodecDecoder.
- Tested with several video and audio files via MediaPlayer.
- Also tested with the test case mp3_decoder_test -after- removing the hard
coded audio decoding parameters. Although the test shows that auto detection is
working (verified by stepping through the auto detection code path), the
complete test still fails due to the missing implementation of incomplete
audio frame decoding.
- Add and update the documentation accordingly.
- Main purpose is to prepare auto detection of audio frame properties for
media formats that encode those properties in the frames themselves (e.g. MP3)
instead of in the container format (e.g. WMA).
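For reference, a minimal sketch of where the detected values come from,
assuming the FFMPEG API of that era (error handling omitted; the real
AVCodecDecoder code is more involved):

// Sketch only: after a successful avcodec_decode_audio4() the codec context
// holds the properties FFMPEG derived from the frame data itself, and the
// decoder can copy them into its output format from there.
extern "C" {
    #include <libavcodec/avcodec.h>
}

static void
ReadDetectedAudioProperties(AVCodecContext* context, int& frameRate,
    int& channelCount, AVSampleFormat& sampleFormat)
{
    frameRate = context->sample_rate;      // e.g. 44100 Hz for a typical MP3
    channelCount = context->channels;
    sampleFormat = context->sample_fmt;    // e.g. AV_SAMPLE_FMT_S16P
}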
The main difference between the similarly named methods _DecodeNextAudioFrame()
and _DecodeNextAudioFrameChunk() is that the former deals with providing the
exact number of audio frames expected by the caller of BMediaDecoder::Decode(),
while the latter deals with decoding any number of audio frames at all (see the
sketch after this list).
- New documentation added and existing documentation updated accordingly.
- No functional change intended.
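A rough sketch of that relationship; everything here apart from the two method
names and BMediaDecoder::Decode() is a placeholder, not the actual member
layout of AVCodecDecoder:

// _DecodeNextAudioFrame() keeps pulling decoded frames until the exact count
// expected by the caller of BMediaDecoder::Decode() is reached, while
// _DecodeNextAudioFrameChunk() (stood in for by DecodeNextAudioFrameChunk()
// below) just decodes whatever the next chunk yields.
#include <SupportDefs.h>

struct AudioDecodingState {
    int32 framesFilled;             // frames already placed in the output
    int32 requestedFrameCount;      // what the caller asked for
    int32 decodedFramesAvailable;   // frames waiting in the decoded buffer
};

// Placeholder stand-ins for the chunk decoder and the copy helper.
static status_t DecodeNextAudioFrameChunk(AudioDecodingState& state);
static void MoveDecodedFramesToOutput(AudioDecodingState& state);

static status_t
DecodeNextAudioFrame(AudioDecodingState& state)
{
    while (state.framesFilled < state.requestedFrameCount) {
        if (state.decodedFramesAvailable == 0) {
            status_t result = DecodeNextAudioFrameChunk(state);
            if (result != B_OK)
                return result;
        }
        MoveDecodedFramesToOutput(state);
    }
    return B_OK;
}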
- Some small refactoring when resetting fRawDecodedAudio. Instead of letting
FFMPEG reset fRawDecodedAudio we do it manually to preserve the allocated
memory in fRawDecodedAudio->opaque (otherwise FFMPEG's
avcodec_get_frame_defaults() would NULLify the opaque pointer without
releasing the allocated memory); a sketch of this follows below.
- Keep track of the total size of fDecodedData in fRawDecodedAudio->linesize[0]
instead of relying on calculating it every time it is needed. This makes the
code more comprehensible.
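A sketch of what the manual reset amounts to, assuming fRawDecodedAudio is an
AVFrame* as the field names above suggest (the helper itself is hypothetical):

// fRawDecodedAudio->opaque points at memory we own; avcodec_get_frame_defaults()
// would NULL it without freeing it, so the pointer is saved and restored around
// the reset.
extern "C" {
    #include <libavcodec/avcodec.h>
}

static void
ResetRawDecodedAudio(AVFrame* rawDecodedAudio)
{
    void* opaque = rawDecodedAudio->opaque;
    avcodec_get_frame_defaults(rawDecodedAudio);
    rawDecodedAudio->opaque = opaque;

    // linesize[0] tracks the total size of fDecodedData, so it starts at zero.
    rawDecodedAudio->linesize[0] = 0;
}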
- Main reason for this refactoring is to increase readability and thus make
the audio decode path more comprehensible.
- Added documentation for the new method accordingly.
- Small change in calculating the decoded data size to clear when an error
occurs during decoding. This way it is more readable and more consistent with
the calculations of decoded data size in other locations.
- No functional change intended.
- Main reason for this refactoring is to increase readability and thus make the
audio decode path more comprehensible.
- Added documentation for the new method accordingly.
- No functional change intended.
- Main reasons are to increase readability of the audio path and to demonstrate
that chunk loading in the audio and video paths is the same code that can be
consolidated into one method (instead of two at the moment). Added a TODO for
collapsing both methods into one and the conditions that must hold true to
do so (just in case I get hit by a bus and someone else has to proceed).
Collapsing is scheduled for a later commit.
- Added documentation for the new method accordingly.
- Make use of the full line length in comments of
_LoadNextVideoChunkIfNeededAndAssignStartTime().
- No functional change intended.
- Main reason for this refactoring is to increase readability and thus make the
audio decode path more comprehensible.
- Added documentation for the new method accordingly.
- Small refactoring for detecting when to update fRawDecodedAudio's properties.
This is a preparation step for factoring out the flushing of the
fDecodedDataBuffer in a later commit.
- No functional change intended.
* Use a BTextView for the "no preview" text again; as Skipp_OSX pointed out,
this allows it to word wrap as needed with any font bigger than 10pt (a small
sketch follows below).
* Show a black screen rather than the "no preview" text for Darkness and
when a screensaver fails to load. This matches what screen_blanker will
do.
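A small sketch of such a read-only, word-wrapping BTextView; the frame, text
rect and strings are placeholder values, not the actual preferences code:

#include <TextView.h>

static BTextView*
MakeNoPreviewView(BRect frame)
{
    BRect textRect = frame.OffsetToCopy(0, 0);
    textRect.InsetBy(5, 5);

    BTextView* view = new BTextView(frame, "no preview", textRect,
        B_FOLLOW_ALL, B_WILL_DRAW);
    view->MakeEditable(false);
    view->MakeSelectable(false);
    view->SetWordWrap(true);    // wraps properly even with fonts bigger than 10pt
    view->SetText("No preview available");

    return view;
}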
The BSD grep doesn't know about \s. Moreover, checking for elf (rather
than ELF) seems to make more sense, as that's the format name, not part
of the description.
Patch suggested by geist. Thanks!
- FFMPEG now handles the relationship of start times between encoded and
decoded audio data by using the fTempPacket->dts and the
fDecodedDataBuffer->pkt_dts fields. We still have to manually keep track of
start times for consecutive audio frames though, to support returning a number
of audio frames that may be assembled from partial AVFrames (see the sketch at
the end of this list).
- The start time of the very first audio frame data packet returned by Decode()
is now correctly calculated based on GetNextChunk() start times instead of
being always zero.
- Introduce fRawDecodedAudio, which serves as a container to store properties
of the audio frames stored in fDecodedData. This prepares the population of the
fHeader structure with audio frame properties needed to allow clients of
BMediaDecoder::Decode() to detect audio format changes in a later commit.
- Remove fStartTime as it is superfluous now.
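For illustration, converting such a dts value into a start time in microseconds
could look roughly like this; how the time base is obtained is an assumption
here, not necessarily how AVCodecDecoder does it:

// Sketch only: turn a dts (e.g. fTempPacket->dts or fDecodedDataBuffer->pkt_dts)
// into a microsecond start time by rescaling from the stream's time base.
#define __STDC_CONSTANT_MACROS    // for AV_NOPTS_VALUE with older C++ standards
extern "C" {
    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>
}
#include <SupportDefs.h>

static bigtime_t
StartTimeFromDts(int64_t dts, AVRational timeBase)
{
    if (dts == AV_NOPTS_VALUE)
        return 0;

    AVRational microseconds;
    microseconds.num = 1;
    microseconds.den = 1000000;

    return (bigtime_t)av_rescale_q(dts, timeBase, microseconds);
}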
- The reason for the compiler complaining about "INT64_C is not defined here"
is gone since the addition of the compiler flag "-D__STDC_CONSTANT_MACROS"
to the Jamfile some time ago. This flag allows C++ to use C99 math features.
- No functional change intended.
- Also change what is printed for video frames. Currently both
debug_fframe_[audio|video] are used in AVCodecDecoder only and thus are
streamlined for their usage there. For example, we print the AVFrame.pkt_dts
field instead of the AVFrame.pkt field because the latter is never touched
by AVCodecDecoder's usage of the FFMPEG library.
Note: AVFrame.pkt never being touched means that it always contains the value
AV_NOPTS_VALUE, making it less useful for debugging purposes.
The packages are the bootstrap ones, modified with the "unbootstrap"
script. Not recommended for real use, but this should make playing with
the ARM build a bit simpler.
The libsolv package somehow got lost in the process when I converted
those. Anyone with a copy of the libsolv_bootstrap packages in their
arm generated folder is welcome to "unbootstrap" and upload it.
- There are two main reasons for this refactoring:
1. Prepare using FFMPEG's functionality of audio frame start time assignment
(instead of rolling it ourselves), as already done for the video path
(see _LoadNextVideoChunkIfNeededAndAssignStartTime() for reference).
2. Get rid of fChunkBufferOffset (this is a minor reason though).
- Untangle some of the conditional checks to increase readability.
- No functional change intended.
- The first method is solely responsible for filling the audio output buffer
with already decoded audio frames.
The second method is solely responsible for decoding the encoded audio data
and putting it into the decoded audio output buffer for further processing by
the first method.
This prepares auto detection of audio frame properties for audio formats
where the properties are contained within the encoded audio frame (e.g. MP3),
instead of within the audio container format (e.g. WMA). Implementing auto
detection is scheduled for a later commit though.
- Added documentation accordingly.
- No functional change intended.
- Use a name that correctly reflects the return value of avcodec_decode_video2().
- Make the DO_PROFILING code path of AVCodecDecoder compile again.
- No functional change intended.
As suggested by akshay, there is no reason to do this only for control
transfers. All input transfers can have short packets and we want to
detect those and trigger the "end of transfer" code when a short packet
happens.
Fixes #11087.
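The general idea, sketched with placeholder names rather than the actual Haiku
USB stack code:

// A short packet on any input transfer means the device is done sending, so
// the transfer should be finished right away instead of waiting for the
// remaining bytes; previously only control transfers got this treatment.
#include <stddef.h>

struct TransferState {
    size_t requestedLength;
    size_t transferredLength;
    bool isInput;
};

static bool
ShouldFinishTransfer(const TransferState& transfer)
{
    bool shortPacket = transfer.isInput
        && transfer.transferredLength < transfer.requestedLength;

    return shortPacket
        || transfer.transferredLength >= transfer.requestedLength;
}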
- This makes the video output look more visually appealing. Without bilinear
filtering you would see aliasing artifacts all over the place. Now it looks
more harmonious.
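A minimal sketch of requesting bilinear filtering from FFMPEG's swscale; the
plugin's actual deinterlacing/conversion setup is more involved than this:

// The colorspace conversion/scaling context is created with SWS_BILINEAR so
// the output is filtered instead of showing blocky aliasing artifacts.
extern "C" {
    #include <libswscale/swscale.h>
}

static SwsContext*
CreateConversionContext(int width, int height, AVPixelFormat codecFormat,
    AVPixelFormat displayFormat)
{
    return sws_getContext(width, height, codecFormat,
        width, height, displayFormat,
        SWS_BILINEAR, NULL, NULL, NULL);
}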
- This gets rid of the complaint "'UINT64_C' was not declared in this scope"
and allows us to remove the (now superfluous) declaration of UINT64_C.
- No functional change intended.
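For reference, the build flag is equivalent to defining the macro before the
first inclusion of <stdint.h>, which is what exposes UINT64_C/INT64_C to C++ on
pre-C++11 toolchains:

// With __STDC_CONSTANT_MACROS defined, <stdint.h> provides UINT64_C/INT64_C,
// so no hand-rolled declaration is needed.
#define __STDC_CONSTANT_MACROS
#include <stdint.h>

static const uint64_t kExample = UINT64_C(0x8000000000000000);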
- This should fix the bug where video files that played well before the recent
changes to the FFMPEG Plugin didn't play anymore. Now we apply the essential
video container properties (that were passed in via Setup()) to the
AVCodecContext. Some video formats simply store those properties in the
container only (e.g. AVI, WMV) and not in the video frames themselves
(e.g. MPEG2).
Tested with several files from samples.ffmpeg.org and from the FATE suite of
FFMPEG.
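A sketch of what applying those properties amounts to; the media_format field
paths below follow my reading of the Media Kit headers and are an assumption,
and the plugin's real Setup() path does more than this:

// Copy container-provided video properties into the codec context before the
// codec is opened; for formats like AVI/WMV these values exist only in the
// container, so the decoder cannot recover them from the frame data alone.
extern "C" {
    #include <libavcodec/avcodec.h>
}
#include <MediaDefs.h>

static void
ApplyContainerVideoProperties(const media_format& format,
    AVCodecContext* context)
{
    context->width = format.u.encoded_video.output.display.line_width;
    context->height = format.u.encoded_video.output.display.line_count;
}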