Starting with cmake 2.8.10 FreeRDP exports cmake find module packages. cmake
2.8.12 introduced the PRIVATE/PUBLIC keywords for target_link_libraries.
When building with cmake older than 2.8.12 (i.e. 2.8.10 or 2.8.11) it is
not possible to mark link dependencies as private, so in that case they are
exported as well.
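For illustration, with cmake >= 2.8.12 a private link dependency stays out
of the exported link interface (a minimal sketch; the target, source and
library names are only examples):

    add_library(freerdp ${FREERDP_SRCS})
    target_link_libraries(freerdp
        PUBLIC  winpr  # part of the link interface, exported to consumers
        PRIVATE zlib)  # implementation detail, not exported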
With this commit the following components are "exported" (usable with
pkg-config and the cmake find module packages):
* winpr - winpr library and headers
* freerdp - core library and headers
* freerdp-client - client specific library
* freerdp-server - server specific library
* rdtk - rdtk headers and library
To allow the installation of multiple different versions (with different
major numbers) the include files were moved into the respective sub folders:
freerdp -> freerdp{MAJOR}/freerdp (currently freerdp2/freerdp/)
winpr -> winpr{MAJOR}/winpr (currently winpr1/winpr/)
rdtk -> rdtk{MAJOR}/rdtk (currently rdtk0/rdtk/)
The generated pkg-config and cmake find modules now also include the major
version number. Currently the following pkg-config files are generated and
installed:
* winpr1
* freerdp2
* freerdp-server2
* freerdp-client2
* rdtk0
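A consumer can pick these up via pkg-config, e.g. from cmake (a minimal
sketch; "myapp" is a placeholder and the FREERDP_* output variables follow
the usual pkg_check_modules conventions):

    find_package(PkgConfig REQUIRED)
    pkg_check_modules(FREERDP REQUIRED freerdp2)

    include_directories(${FREERDP_INCLUDE_DIRS})
    link_directories(${FREERDP_LIBRARY_DIRS})
    add_executable(myapp main.c)
    target_link_libraries(myapp ${FREERDP_LIBRARIES})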
As cmake is able to handle multiple versions out of the box the
following can be used to find a specific module:
find_package(WinPR)
find_package(FreeRDP)
find_package(FreeRDP-Server)
find_package(FreeRDP-Client)
find_package(RdTk)
As cmake doesn't automatically resolve dependencies for packages, it is
necessary to include the requirements manually. For example, if
FreeRDP-Client is required, WinPR and FreeRDP need to be included
(find_package) as well, as in the sketch below.
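A minimal sketch (the exact variable names set by the find modules may
differ; check the modules themselves for the authoritative names):

    find_package(WinPR REQUIRED)
    find_package(FreeRDP REQUIRED)
    find_package(FreeRDP-Client REQUIRED)

    include_directories(${WinPR_INCLUDE_DIR} ${FreeRDP_INCLUDE_DIR}
                        ${FreeRDP-Client_INCLUDE_DIR})
    add_executable(myclient main.c)
    target_link_libraries(myclient ${FreeRDP-Client_LIBRARIES}
                          ${FreeRDP_LIBRARIES} ${WinPR_LIBRARIES})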
This commit also fixes the installation when STATIC_CHANNELS are built.
With STATIC_CHANNELS all channels are linked into libfreerdp-client. For
this all channels are built as static linker archives and linked together
in the final step. Previously the intermediate linker archives were
installed as well, although they are neither required nor useful. The same
applies to the server side channels.
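In cmake terms the fix amounts to not installing the intermediate archives
(a sketch with made-up channel names):

    add_library(client-channel-rdpsnd STATIC rdpsnd.c)  # intermediate archive
    target_link_libraries(freerdp-client client-channel-rdpsnd)
    # only the final library is installed, not the intermediate archives
    install(TARGETS freerdp-client DESTINATION lib)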
build-config.h should contain configure/compile-time settings that are
relevant for projects that use FreeRDP, for example the compiled-in plugin
search paths.
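A sketch of how such a setting could be baked in via configure_file (the
variable and macro names here are illustrative, not the actual FreeRDP
ones):

    # CMakeLists.txt
    set(FREERDP_PLUGIN_PATH "${CMAKE_INSTALL_PREFIX}/lib/freerdp2")
    configure_file(build-config.h.in
                   ${CMAKE_BINARY_DIR}/include/freerdp/build-config.h)

    # build-config.h.in
    #cmakedefine FREERDP_PLUGIN_PATH "@FREERDP_PLUGIN_PATH@"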
- Use autovideosink - xvimagesink does not work on cards with no xv ports available and cannot be used if one wants to use the fluendo hardware accelerated playback codec
- Use autoaudiosink - let gstreamer choose the best audio playback plugin
- Catch when autosinks add known elements so that we can manipulate properties on them
- Adjust caps of various media types to work better with gstreamer, some codecs are picky about having certain fields available
- Remove unneeded plugins such as "ffmpegcolorspace" and "videoscale" - these do not work correctly with fluendo hardware accelerated playback codec
- Name audio/video gstreamer elements better for easier debugging
- Update gstreamer pipeline and element properties to handle playback better
- Detect when valid timestamps are available for buffers from the server and try to compensate when they are not
- Start time is much more reliable than end time from the server for various media formats, so use it when possible to make decisions instead of end time
- Do not rebuild the gstreamer pipeline for a seek (very expensive); instead reset gstreamer time to 0 and maintain the offset between real time and gstreamer time
- Change the buffer filled function back to a buffer level function, so that we can use the buffer level to make better choices above the gstreamer decoder in tsmf
- Remove ack function from gstreamer, instead rely on ack thread to handle acks
- Rework X11 gstreamer code to handle various videosinks which implement the XOverlayInterface and to keep more detailed information on the sub-window that is used for display
- Add a check for decoder availability when telling the server which media types the client supports
- Add support for the M4S2 and WMA1 media types
- Fix flush message handling, they are for individual streams and not the entire presentation
- Delay the eos response to allow more time for buffers to be loaded into the decoder, as we ack ahead of the server and the server issues stop as soon as we ack eos
- Fix issue with geometry info being ignored when resent for new streams within an existing presentation
- Fix volume level initialization issue when a stream is stopped and restarted
- Attempt to sync the video/audio streams: because we run two different gstreamer pipelines, they can enter pause/playing states at different times and are thus not synchronized. Attempt to adjust video buffer timestamps based on the difference between audio/video running time to account for this difference. This logic accounts for a huge improvement in audio/video sync (i.e. lip sync to words)