RSA-PSS in X.509 is truly horrible, and OpenSSL does not expose very good APIs
for extracting the digest from the PSS parameters, even though the library does
handle them internally. Instead, we
must tediously unwrap RFC 4055's unnecessarily complicated encoding of
RFC 8017's unnecessarily flexible RSA-PSS definition.
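Roughly, the unwrapping looks like this with OpenSSL (a sketch; the helper
name is illustrative and error handling is trimmed):

    #include <openssl/x509.h>
    #include <openssl/rsa.h>
    #include <openssl/obj_mac.h>

    /* Sketch: recover the hash NID from a certificate signed with RSA-PSS. */
    static int get_pss_digest_nid(const X509* cert)
    {
        const X509_ALGOR* alg = NULL;
        int ptype = 0;
        const void* pval = NULL;
        int md_nid = NID_sha1; /* RFC 4055: hashAlgorithm defaults to SHA-1 */

        X509_get0_signature(NULL, &alg, cert);
        if (OBJ_obj2nid(alg->algorithm) != NID_rsassaPss)
            return NID_undef;

        X509_ALGOR_get0(NULL, &ptype, &pval, alg);
        if (ptype == V_ASN1_SEQUENCE)
        {
            const ASN1_STRING* seq = pval;
            const unsigned char* der = ASN1_STRING_get0_data(seq);
            RSA_PSS_PARAMS* pss =
                d2i_RSA_PSS_PARAMS(NULL, &der, ASN1_STRING_length(seq));
            if (!pss)
                return NID_undef;
            if (pss->hashAlgorithm)
                md_nid = OBJ_obj2nid(pss->hashAlgorithm->algorithm);
            RSA_PSS_PARAMS_free(pss);
        }
        return md_nid;
    }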
This reverts commit 00baf58a71. That
change appears to have been incorrect. It's described as simply
retrieving the "default signature digest", but it actually changed the
function's behavior entirely. The function wasn't retrieving defaults
previously.
A certificate contains, among other things, a public key and a
signature. The public key is the public key of the subject. However, the
signature was generated by the issuer. That is, if I get a certificate
from a CA, the public key will be my public key and the signature will
be my CA's signature over the certificate contents.
Now, the original code returned the digest used in the certificate's
signature. That is, it tells you which signature algorithm my *CA*
used to sign my certificate.
The new code extracts the certificate's public key (my public key, not
the CA's). This doesn't necessarily tell you the signature algorithm, so
it then asks OpenSSL what "default" signature algorithm it would use
with the key. This notion of "default" is ad-hoc and has changed over
time with OpenSSL releases. It doesn't correspond to any particular
protocol semantics. It's not necessarily the signature algorithm of the
certificate.
Now, looking at where this function is used, it's called by
freerdp_certificate_get_signature_alg, which is called by
tls_get_channel_binding to compute the tls-server-end-point channel
binding. That code cites RFC 5929, which discusses picking the hash
algorithm based on the certificate's signatureAlgorithm:
https://www.rfc-editor.org/rfc/rfc5929#section-4.1
That is, the old version of the code was correct and the
"simplification" broke it. Revert this and restore the original version.
I suspect this went unnoticed because, almost all the time, both the old
and new code picked SHA-256 and it was fine. But if the certificate was,
say, signed with SHA-384, the new code would compute the wrong channel
binding.
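For reference, the hash selection the original code performs boils down to
something like this (a sketch, not FreeRDP's exact helper):

    #include <openssl/x509.h>
    #include <openssl/obj_mac.h>

    /* Map the certificate's signatureAlgorithm to the tls-server-end-point
     * hash per RFC 5929, section 4.1. */
    static int channel_binding_digest_nid(const X509* cert)
    {
        int md_nid = NID_undef;
        if (!OBJ_find_sigid_algs(X509_get_signature_nid(cert), &md_nid, NULL))
            return NID_undef;
        /* RFC 5929: MD5 and SHA-1 are both replaced by SHA-256. */
        if (md_nid == NID_md5 || md_nid == NID_sha1)
            md_nid = NID_sha256;
        return md_nid;
    }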
The RDS AAD Auth PDUs have no packet headers to indicate length.
Instead, these packets are zero-terminated strings. Somehow, Windows
accepts Authentication Request PDUs without a terminating null byte
during regular connections, but not through WVD websocket gateways.
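A sketch of building such a PDU with WinPR streams (the function name and
the `auth_req` payload are illustrative):

    #include <string.h>
    #include <winpr/stream.h>

    /* AAD Auth PDUs carry no length header; the payload is a
     * zero-terminated string, so the NUL byte must go on the wire too. */
    static wStream* aad_auth_request_pdu(const char* auth_req)
    {
        const size_t len = strlen(auth_req);
        wStream* s = Stream_New(NULL, len + 1);
        if (!s)
            return NULL;
        Stream_Write(s, auth_req, len);
        Stream_Write_UINT8(s, 0); /* required even through WVD gateways */
        return s;
    }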
If the peer state machine is in state
CONNECTION_STATE_CAPABILITIES_EXCHANGE_MONITOR_LAYOUT, properly check for
available data. If a PDU is received in this state, it is an out-of-sequence
PDU (which might happen during deactivation/reactivation) and must be parsed.
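Sketch of the check, with a hypothetical handler name:

    case CONNECTION_STATE_CAPABILITIES_EXCHANGE_MONITOR_LAYOUT:
        /* Only parse if data is actually available: any PDU received in
         * this state is out of sequence (e.g. deactivation/reactivation). */
        if (Stream_GetRemainingLength(s) > 0)
            status = peer_recv_out_of_sequence_pdu(client, s);
        break;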
* FreeRDP_WTSVirtualChannelWrite might be called from different threads,
  so lock the function execution to keep split packets in order (see the
  sketch after this list)
* unify DVC and SVC channel creation/deletion to avoid duplicate code
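A sketch of the locking, assuming a per-channel WinPR CRITICAL_SECTION
(the struct and function are illustrative, not FreeRDP's actual ones):

    #include <winpr/synch.h>
    #include <winpr/wtypes.h>

    typedef struct
    {
        CRITICAL_SECTION writeLock; /* assumed per-channel lock, set up
                                     * with InitializeCriticalSection at
                                     * channel creation */
        /* ... channel state ... */
    } Channel;

    /* Serialize writers so the chunks of one split packet cannot
     * interleave with another thread's write. */
    static void channel_write(Channel* ch, const BYTE* data, size_t size)
    {
        EnterCriticalSection(&ch->writeLock);
        /* ... split data[0..size) into chunks and send them, in order ... */
        LeaveCriticalSection(&ch->writeLock);
    }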
The client must handle graphics updates in EndPaint.
If we have already reached BeginPaint again, reset the invalidated regions,
as they were already processed, and start anew. Fixes #9672
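A sketch of the BeginPaint side, assuming the client uses the common gdi
invalid-region bookkeeping (function name illustrative):

    #include <freerdp/gdi/gdi.h>

    static BOOL client_begin_paint(rdpContext* context)
    {
        rdpGdi* gdi = context->gdi;
        HGDI_WND hwnd = gdi->primary->hdc->hwnd;
        /* Anything invalidated before the previous EndPaint has already
         * been processed there; drop it and start collecting anew. */
        hwnd->invalid->null = TRUE;
        hwnd->ninvalid = 0;
        return TRUE;
    }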
The previous code assumed that the host name used for AAD was the
ServerHostname parameter. But when you connect directly to Azure hosts you
most likely connect by IP and use a short name for the AAD host, so you need
to be able to give ServerHostname=<IP of host> and AadServerHostname=<shortname>.
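For example, via the settings API (assuming the new FreeRDP_AadServerHostname
settings key; the values are placeholders):

    #include <freerdp/settings.h>

    /* Connect to the host by IP, but run AAD against the short name. */
    freerdp_settings_set_string(settings, FreeRDP_ServerHostname, "203.0.113.7");
    freerdp_settings_set_string(settings, FreeRDP_AadServerHostname, "myhost");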
The compiler may complain with an 'implicit conversion changes
signedness' warning. Get rid of these warnings by explicitly
casting the respective values before shifting them.
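The pattern, illustrated (function name illustrative):

    #include <winpr/wtypes.h>

    static UINT32 read_uint32_be(const BYTE* p)
    {
        /* Each BYTE is promoted to (signed) int before shifting; the
         * explicit UINT32 casts keep the conversion unsigned and silence
         * the 'implicit conversion changes signedness' warning. */
        return ((UINT32)p[0] << 24) | ((UINT32)p[1] << 16) |
               ((UINT32)p[2] << 8) | (UINT32)p[3];
    }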