The JSON parser optionally supports interpolation. The code calls it
"escape". Awkward, because it uses the same term for escape sequences
within strings. The latter usage is consistent with RFC 8259 "The
JavaScript Object Notation (JSON) Data Interchange Format" and ISO C.
Call the former "interpolation" instead.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-38-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-37-armbru@redhat.com>
json_parser_parse() normally returns the QObject on success. Except
it returns null when its @tokens argument is null.
Its only caller json_message_process_token() passes null @tokens when
emitting a lexical error. The call is a rather opaque way to say json
= NULL then.
Simplify matters by lifting the assignment to json out of the emit
path: initialize json to null, set it to the value of
json_parser_parse() when there's no lexical error. Drop the special
case from json_parser_parse().
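To illustrate the shape of that change, here is a self-contained sketch of the
pattern; parse() and emit() are stand-ins for json_parser_parse() and the emit
callback, not the actual QEMU code:
#include <stdio.h>
#include <stdbool.h>
/* Stand-ins for json_parser_parse() and the streamer's emit callback */
static const char *parse(const char *tokens)
{
    return tokens;
}
static void emit(const char *json)
{
    if (json) {
        printf("value: %s\n", json);
    } else {
        printf("lexical error\n");
    }
}
static void process_token_sequence(const char *tokens, bool lexical_error)
{
    const char *json = NULL;      /* stays null on lexical error */
    if (!lexical_error) {
        json = parse(tokens);     /* assign only when we actually parse */
    }
    emit(json);                   /* emit path needs no special case */
}
int main(void)
{
    process_token_sequence("42", false);
    process_token_sequence(NULL, true);
    return 0;
}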
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-36-armbru@redhat.com>
The classical way to structure parser and lexer is to have the client
call the parser to get an abstract syntax tree, the parser call the
lexer to get the next token, and the lexer call some function to get
input characters.
Another way to structure them would be to have the client feed
characters to the lexer, the lexer feed tokens to the parser, and the
parser feed abstract syntax trees to some callback provided by the
client. This way is more easily integrated into an event loop that
dispatches input characters as they arrive.
Our JSON parser is kind of between the two. The lexer feeds tokens to
a "streamer" instead of a real parser. The streamer accumulates
tokens until it has the sequence of tokens that comprise a single JSON
value (it counts curly braces and square brackets to decide). It
feeds those token sequences to a callback provided by the client. The
callback passes each token sequence to the parser, and gets back an
abstract syntax tree.
I figure it was done that way to make a straightforward recursive
descent parser possible. "Get next token" becomes "pop the first
token off the token sequence". Drawback: we need to store a complete
token sequence. Each token eats 13 bytes, plus its input characters,
plus malloc overhead.
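For illustration, a toy, self-contained version of the streamer idea described
above: accumulate input until the braces and brackets balance, then hand the
complete chunk to a callback. It works on raw characters rather than tokens,
and is not the QEMU code:
#include <stdio.h>
typedef void (*emit_fn)(const char *buf, size_t len);
struct streamer {
    char buf[256];
    size_t len;
    int depth;                      /* nesting of {} and [] seen so far */
    emit_fn emit;
};
static void streamer_feed(struct streamer *s, char ch)
{
    if (s->len < sizeof(s->buf)) {
        s->buf[s->len++] = ch;
    }
    if (ch == '{' || ch == '[') {
        s->depth++;
    } else if (ch == '}' || ch == ']') {
        s->depth--;
    }
    if (s->depth == 0 && s->len) {  /* a complete value has accumulated */
        s->emit(s->buf, s->len);
        s->len = 0;
    }
}
static void print_chunk(const char *buf, size_t len)
{
    printf("value: %.*s\n", (int)len, buf);
}
int main(void)
{
    struct streamer s = { .emit = print_chunk };
    const char *input = "{\"a\": [1, 2]}";
    for (const char *p = input; *p; p++) {
        streamer_feed(&s, *p);
    }
    return 0;
}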
Observations:
1. This is not the only way to use recursive descent. If we replaced
"get next token" by a coroutine yield, we could do without a
streamer.
2. The lexer reports errors by passing a JSON_ERROR token to the
streamer. This communicates the offending input characters and
their location, but no more.
3. The streamer reports errors by passing a null token sequence to the
callback. The (already poor) lexical error information is thrown
away.
4. Having the callback receive a token sequence duplicates the code to
convert token sequence to abstract syntax tree in every callback.
5. Known bug: the streamer silently drops incomplete token sequences.
This commit rectifies 4. by lifting the call of the parser from the
callbacks into the streamer. Later commits will address 3. and 5.
The lifting removes a bug from qjson.c's parse_json(): it passed a
pointer to a non-null Error * in certain cases, as demonstrated by
check-qjson.c.
json_parser_parse() is now unused. It's a stupid wrapper around
json_parser_parse_err(). Drop it, and rename json_parser_parse_err()
to json_parser_parse().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-35-armbru@redhat.com>
json_lexer_init() takes the function to process a token as an
argument. It's always json_message_process_token(). Makes the code
harder to understand for no actual gain. Drop the indirection.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-34-armbru@redhat.com>
parser_context_new/free() are only used from json_parser_parse(). We
can fold the code there and avoid an allocation altogether.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180719184111.5129-9-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180823164025.12553-33-armbru@redhat.com>
The lexer always returns 0 when feeding characters. Furthermore, none
of the callers care about the return value.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20180326150916.9602-10-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180823164025.12553-32-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-31-armbru@redhat.com>
The JSON parser treats each half of a surrogate pair as an unpaired
surrogate. Fix it to recognize surrogate pairs.
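For reference, this is how a surrogate pair combines into one code point
(standard UTF-16 decoding; illustrative code, not the parser's
implementation): the high surrogate D800..DBFF supplies the upper 10 bits,
the low surrogate DC00..DFFF the lower 10, offset by U+10000.
#include <assert.h>
#include <stdint.h>
static uint32_t combine_surrogates(uint16_t high, uint16_t low)
{
    assert(high >= 0xD800 && high <= 0xDBFF);
    assert(low >= 0xDC00 && low <= 0xDFFF);
    return 0x10000 + (((uint32_t)(high - 0xD800) << 10) | (low - 0xDC00));
}
int main(void)
{
    /* "\uD834\uDD1E" in JSON denotes U+1D11E MUSICAL SYMBOL G CLEF */
    assert(combine_surrogates(0xD834, 0xDD1E) == 0x1D11E);
    return 0;
}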
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-30-armbru@redhat.com>
The JSON parser translates invalid \uXXXX to garbage instead of
rejecting it, and swallows \u0000.
Fix by using mod_utf8_encode() instead of flawed wchar_to_utf8().
Valid surrogate pairs are now differently broken: they're rejected
instead of translated to garbage. The next commit will fix them.
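For reference, standard UTF-8 encoding of a code point, the job the flawed
wchar_to_utf8() was doing. This is illustrative only, not mod_utf8_encode();
the latter presumably also emits the overlong \xC0\x80 for U+0000 ("modified
UTF-8"), which this sketch does not.
#include <assert.h>
#include <string.h>
/* Returns bytes written, or -1 for surrogates and values beyond U+10FFFF */
static int utf8_encode(char *buf, unsigned cp)
{
    if (cp >= 0xD800 && cp <= 0xDFFF) {
        return -1;
    }
    if (cp < 0x80) {
        buf[0] = cp;
        return 1;
    }
    if (cp < 0x800) {
        buf[0] = 0xC0 | (cp >> 6);
        buf[1] = 0x80 | (cp & 0x3F);
        return 2;
    }
    if (cp < 0x10000) {
        buf[0] = 0xE0 | (cp >> 12);
        buf[1] = 0x80 | ((cp >> 6) & 0x3F);
        buf[2] = 0x80 | (cp & 0x3F);
        return 3;
    }
    if (cp <= 0x10FFFF) {
        buf[0] = 0xF0 | (cp >> 18);
        buf[1] = 0x80 | ((cp >> 12) & 0x3F);
        buf[2] = 0x80 | ((cp >> 6) & 0x3F);
        buf[3] = 0x80 | (cp & 0x3F);
        return 4;
    }
    return -1;
}
int main(void)
{
    char buf[4];
    assert(utf8_encode(buf, 0x20AC) == 3);
    assert(memcmp(buf, "\xE2\x82\xAC", 3) == 0);
    return 0;
}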
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-29-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-28-armbru@redhat.com>
Both lexer and parser reject invalid escape sequences in strings. The
parser's check is useless.
The lexer ends the token right after the first non-well-formed byte.
This tends to lead to suboptimal error reporting. For instance, input
{"abc\@ijk": 1}
produces the tokens
JSON_LCURLY {
JSON_ERROR "abc\@
JSON_KEYWORD ijk
JSON_ERROR ": 1}\n
The parser then reports three errors
Invalid JSON syntax
JSON parse error, invalid keyword 'ijk'
Invalid JSON syntax
before it recovers at the newline.
Drop the lexer's escape sequence checking, and make it accept the same
characters after backslash it accepts elsewhere in strings. It now
produces
JSON_LCURLY {
JSON_STRING "abc\@ijk"
JSON_COLON :
JSON_INTEGER 1
JSON_RCURLY }
and the parser reports just
JSON parse error, invalid escape sequence in string
While there, fix parse_string()'s inaccurate function comment.
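RFC 8259 permits exactly these escapes in strings: \" \\ \/ \b \f \n \r \t
and \uXXXX. A minimal sketch of the kind of check parse_string() needs once
the lexer no longer rejects bad escapes (not the actual QEMU code):
#include <assert.h>
#include <stdbool.h>
#include <string.h>
static bool valid_escape(const char *s)    /* s points just past the backslash */
{
    if (s[0] && strchr("\"\\/bfnrt", s[0])) {
        return true;
    }
    if (s[0] == 'u') {
        for (int i = 1; i <= 4; i++) {
            if (!s[i] || !strchr("0123456789abcdefABCDEF", s[i])) {
                return false;
            }
        }
        return true;
    }
    return false;
}
int main(void)
{
    assert(valid_escape("n"));
    assert(valid_escape("u0041"));
    assert(!valid_escape("@"));            /* as in the "abc\@ijk" example above */
    return 0;
}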
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-27-armbru@redhat.com>
Since the JSON grammar doesn't accept U+0000 anywhere, this merely
exchanges one kind of parse error for another. It's purely for
consistency with qobject_to_json(), which accepts \xC0\x80 (see commit
e2ec3f9768).
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-26-armbru@redhat.com>
Both the lexer and the parser (attempt to) validate UTF-8 in JSON
strings.
The lexer rejects bytes that can't occur in valid UTF-8: \xC0..\xC1,
\xF5..\xFF. This rejects some, but not all invalid UTF-8. It also
rejects ASCII control characters \x00..\x1F, in accordance with RFC
8259 (see recent commit "json: Reject unescaped control characters").
When the lexer rejects, it ends the token right after the first bad
byte. Good when the bad byte is a newline. Not so good when it's
something like an overlong sequence in the middle of a string. For
instance, input
{"abc\xC0\xAFijk": 1}\n
produces the tokens
JSON_LCURLY {
JSON_ERROR "abc\xC0
JSON_ERROR \xAF
JSON_KEYWORD ijk
JSON_ERROR ": 1}\n
The parser then reports four errors
Invalid JSON syntax
Invalid JSON syntax
JSON parse error, invalid keyword 'ijk'
Invalid JSON syntax
before it recovers at the newline.
The commit before previous made the parser reject invalid UTF-8
sequences. Since then, anything the lexer rejects, the parser would
reject as well. Thus, the lexer's rejecting is unnecessary for
correctness, and harmful for error reporting.
However, we want to keep rejecting ASCII control characters in the
lexer, because that produces the behavior we want for unclosed
strings.
We also need to keep rejecting \xFF in the lexer, because we
documented that as a way to reset the JSON parser
(docs/interop/qmp-spec.txt section 2.6 QGA Synchronization), which
means we can't change how we recover from this error now. I wish we
hadn't done that.
I think we should treat \xFE the same as \xFF.
Change the lexer to accept \xC0..\xC1 and \xF5..\xFD. It now rejects
only \x00..\x1F and \xFE..\xFF. Error reporting for invalid UTF-8 in
strings is much improved, except for \xFE and \xFF. For the example
above, the lexer now produces
JSON_LCURLY {
JSON_STRING "abc\xC0\xAFijk"
JSON_COLON :
JSON_INTEGER 1
JSON_RCURLY }
and the parser reports just
JSON parse error, invalid UTF-8 sequence in string
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-25-armbru@redhat.com>
Quiz time! When a parser reports multiple errors, but the user gets
to see just one, which one is (on average) the least useful one?
Yes, you're right, it's the last one! You're clearly familiar with
compilers.
Which one does QEMU report?
Right again, the last one! You're clearly familiar with QEMU.
Reproducer: feeding
{"abc\xC2ijk": 1}\n
to QMP produces
{"error": {"class": "GenericError", "desc": "JSON parse error, key is not a string in object"}}
Report the first error instead. The reproducer now produces
{"error": {"class": "GenericError", "desc": "JSON parse error, invalid UTF-8 sequence in string"}}
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-24-armbru@redhat.com>
We reject bytes that can't occur in valid UTF-8 (\xC0..\xC1,
\xF5..\xFF) in the lexer. That's insufficient; there's plenty of
invalid UTF-8 not containing these bytes, as demonstrated by
check-qjson:
* Malformed sequences
- Unexpected continuation bytes
- Missing continuation bytes after start bytes other than
\xC0..\xC1, \xF5..\xFD.
* Overlong sequences with start bytes other than \xC0..\xC1,
\xF5..\xFD.
* Invalid code points
Fixing this in the lexer would be bothersome. Fixing it in the parser
is straightforward, so do that.
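For reference, a compact, self-contained sketch of the checks such
parser-side validation involves: correct continuation bytes, no overlong
forms, no surrogates, nothing beyond U+10FFFF. It is illustrative only, not
the QEMU code; note QEMU additionally accepts the overlong \xC0\x80 as
U+0000 ("modified UTF-8", see above), which this sketch rejects.
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
static bool valid_utf8_sequence(const unsigned char *s, size_t len)
{
    unsigned cp, min;
    size_t need;
    if (s[0] < 0x80) {
        cp = s[0]; need = 0; min = 0;
    } else if ((s[0] & 0xE0) == 0xC0) {
        cp = s[0] & 0x1F; need = 1; min = 0x80;
    } else if ((s[0] & 0xF0) == 0xE0) {
        cp = s[0] & 0x0F; need = 2; min = 0x800;
    } else if ((s[0] & 0xF8) == 0xF0) {
        cp = s[0] & 0x07; need = 3; min = 0x10000;
    } else {
        return false;               /* continuation byte or \xF8..\xFF */
    }
    if (len != need + 1) {
        return false;               /* wrong length */
    }
    for (size_t i = 1; i < len; i++) {
        if ((s[i] & 0xC0) != 0x80) {
            return false;           /* missing continuation byte */
        }
        cp = (cp << 6) | (s[i] & 0x3F);
    }
    return cp >= min                        /* not overlong */
        && cp <= 0x10FFFF                   /* a real code point */
        && !(cp >= 0xD800 && cp <= 0xDFFF); /* not a surrogate */
}
int main(void)
{
    assert(!valid_utf8_sequence((const unsigned char *)"\xC0\xAF", 2));     /* overlong '/' */
    assert(!valid_utf8_sequence((const unsigned char *)"\xED\xA0\x80", 3)); /* surrogate */
    assert(valid_utf8_sequence((const unsigned char *)"\xE2\x82\xAC", 3));  /* U+20AC */
    return 0;
}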
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-23-armbru@redhat.com>
The JSON parser rejects some invalid sequences, but accepts others
without correcting the problem.
We should either reject all invalid sequences, or minimize overlong
sequences and replace all other invalid sequences by a suitable
replacement character. A common choice for replacement is U+FFFD.
I'm going to implement the former. Update the comments in
utf8_string() to expect this.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-22-armbru@redhat.com>
Simplify loop control, and assert that the string ends with the
appropriate quote (the lexer ensures it does).
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-21-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-20-armbru@redhat.com>
Fix the lexer to reject unescaped control characters in JSON strings,
in accordance with RFC 8259 "The JavaScript Object Notation (JSON)
Data Interchange Format".
Bonus: we now recover more nicely from unclosed strings. E.g.
{"one: 1}\n{"two": 2}
now recovers cleanly after the newline, where before the lexer
remained confused until the next unpaired double quote or lexical
error.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-19-armbru@redhat.com>
json_lexer[] maps (lexer state, input character) to the new lexer
state. The input character is consumed unless the new state is
terminal and the input character doesn't belong to this token,
i.e. the state transition uses look-ahead. When this is the case,
input character '\0' would result in the same state transition.
TERMINAL_NEEDED_LOOKAHEAD() exploits this.
Except this is wrong for transitions to IN_ERROR. There, the
offending input character is in fact consumed: case IN_ERROR returns.
It isn't added to the JSON_ERROR token, though.
Fix that by making TERMINAL_NEEDED_LOOKAHEAD() return false for
transitions to IN_ERROR.
There's a slight complication. json_lexer_flush() passes input
character '\0' to flush an incomplete token. If this results in
JSON_ERROR, we'd now add the '\0' to the token. Suppress that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-18-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-17-armbru@redhat.com>
RFC 8259 "The JavaScript Object Notation (JSON) Data Interchange
Format" requires control characters in strings to be escaped.
Demonstrate the JSON parser accepts U+0001 .. U+001F unescaped.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-16-armbru@redhat.com>
Some of utf8_string()'s test_cases[] contain multiple invalid
sequences. Testing that qobject_from_json() fails only tests we
reject at least one invalid sequence. That's incomplete.
Additionally test each non-space sequence in isolation.
This demonstrates that the JSON parser accepts invalid sequences
starting with \xC2..\xF4. Add a FIXME comment.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-15-armbru@redhat.com>
The previous commit made utf8_string()'s test_cases[].utf8_in
superfluous: we can use .json_in instead. Except for the case testing
U+0000. \x00 doesn't work in C strings, so it tests \\u0000 instead.
But testing \\uXXXX is escaped_string()'s job. It's covered there.
Test U+0001 here, and drop .utf8_in.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-14-armbru@redhat.com>
utf8_string() tests only double quoted strings. Cover single quoted
strings, too: store the strings to test without quotes, then wrap them
in either kind of quote.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-13-armbru@redhat.com>
simple_string() and single_quote_string() have become redundant with
escaped_string(), except for embedded single and double quotes.
Replace them by a test that covers just that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-12-armbru@redhat.com>
Cover escaped single quote, surrogates, invalid escapes, and
noncharacters. This demonstrates that valid surrogate pairs are
misinterpreted, and invalid surrogates and noncharacters aren't
rejected.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-11-armbru@redhat.com>
Merge a few closely related test strings, and drop a few redundant
ones.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-10-armbru@redhat.com>
escaped_string() first tests double quoted strings, then repeats a few
tests with single quotes. Repeat all of them: store the strings to
test without quotes, and wrap them in either kind of quote for
testing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-9-armbru@redhat.com>
To permit recovering from arbitrary JSON parse errors, the JSON parser
resets itself on lexical errors. We recommend sending a 0xff byte for
that purpose, and test-qga covers this usage since commit 5229564b83.
That commit had to add an ugly hack to qmp_fd_vsend() to make it capable
of sending this byte (it's designed to send only valid JSON).
The previous commit added a way to send arbitrary text. Put that to
use for this purpose, and drop the hack from qmp_fd_vsend().
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-8-armbru@redhat.com>
qmp-test neglects to cover QMP input that isn't valid JSON. libqtest
doesn't let us send such input. Add qtest_qmp_send_raw() for this
purpose, and put it to use in qmp-test.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-7-armbru@redhat.com>
[Commit message typo fixed]
qmp-test is for QMP protocol tests. Commit e4a426e75e added generic,
basic tests of query commands to it. Move them to their own test
program qmp-cmd-test, to keep qmp-test focused on the protocol.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-6-armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-5-armbru@redhat.com>
qobject_from_json() can return null without setting an error on
lexical errors. I call that a bug. Add test coverage to demonstrate
it.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-4-armbru@redhat.com>
qobject_from_json() & friends misbehave when the JSON text has more
than one JSON value. Add test coverage to demonstrate the bugs.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-3-armbru@redhat.com>
Section "QGA Synchronization" specifies that sending "a raw 0xFF
sentinel byte" makes the server "reset its state and discard all
pending data prior to the sentinel." What actually happens there is a
lexical error, which will produce one or more error responses.
Moreover, it's not specific to QGA.
Create new section "Forcing the JSON parser into known-good state" to
document the technique properly. Rewrite section "QGA
Synchronization" to document just the other direction, i.e. command
guest-sync-delimited.
Section "Protocol Specification" mentions "synchronization bytes
(documented below)". Delete that.
While there, fix it not to claim '"Server" is QEMU itself', but
'"Server" is either QEMU or the QEMU Guest Agent'.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180823164025.12553-2-armbru@redhat.com>
Add definition of the first nanoMIPS processor in QEMU.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Fix argument passing for nanoMIPS bare metal related to the
semihosting regime.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Setup the GT64120 BARs in the nanoMIPS bootloader, in the same way that
they are set up in the MIPS32 bootloader. This is necessary for Linux to
be able to access peripherals, including the UART.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Paul Burton <pburton@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Add basic nanoMIPS boot code for Malta.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
nanoMIPS is always NaN2008 compliant, so the rules for checking
FCR31's NAN2008 bit no longer apply.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Modify load_elf32()/load_elf64() to treat EM_NANOMIPS as legal, just
as EM_MIPS is.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
With the introduction of nanoMIPS, the machine variant can be either
EM_MIPS or EM_NANOMIPS.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Value 249 is registered for use by nanoMIPS executables.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Fix ERET/ERETNC so that ADEL exception can be raised.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
Update BadInstr and BadInstrX registers for nanoMIPS. The same
support for pre-nanoMIPS remains unimplemented.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Yongbok Kim <yongbok.kim@mips.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>
A set of nanoMIPS instructions is not available if Config5 bit NMS
is set.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Add emulation of DSP ASE instructions for nanoMIPS - part 6.
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Stefan Markovic <smarkovic@wavecomp.com>