In statement expressions we really mustn't emit backward jumps
under nocode_wanted (they would form infinite loops as no expressions
are evaluated). Do-while and explicit loops with gotos weren't
handled.
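A sketch of the kind of code that could trigger this (illustrative, not the actual testcase):
    int a = 0 ? ({ do { } while (1); 1; }) : 2;          /* do-while under nocode_wanted */
    int b = 0 ? ({ again: ; goto again; 3; }) : 4;       /* explicit loop with a goto */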
This happens when e.g. string constants (or other static data)
are passed as operands to inline asm as immediates. The produced
symbol ref wouldn't be found. So tighten the connection between
the C and asm-local symbol tables even more.
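For illustration (made-up i386 snippet), the kind of operand that produces such a symbol ref:
    static const char msg[] = "hi";
    __asm__ volatile ("movl %0, %%esi" : : "i" (msg) : "esi");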
Our preprocessor throws away # line-comments in asm mode.
It also did so inside preprocessor directives, thereby
removing stringification. Parse defines in non-asm mode (but
retain '.' as an identifier character inside macro definitions).
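An illustrative .S fragment where the '#' inside the directive must survive:
    #define STRINGIFY(x) .ascii #x
    STRINGIFY(hello)        /* must expand to: .ascii "hello" */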
The return value of statement expressions might refer to local
symbols, so those can't be popped. The old error message was always
just a band-aid, and since it was disabled for pointer types it
wasn't effective anyway. It also never considered that the
vtop->sym member might have referred to such symbols as well (see the
testcase with the local static, which used to segfault).
To fix this (best seen with valgrind and SYM_DEBUG),
simply leave local symbols of stmt exprs on the stack.
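A sketch in the spirit of the testcases (illustrative only):
    int *p = ({ static int s = 42; &s; });   /* local static referenced by the result */
    int  v = ({ int t = 1; t + 1; });        /* result refers to a local of the stmt expr */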
The include directive needs to be parsed as pp-tokens, not
as tokens (i.e. no conversion to TOK_STR or TOK_NUM). Also fix
parsing computed includes using quoted strings.
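For example, a computed include with a quoted string:
    #define CONFIG_HEADER "stdio.h"
    #include CONFIG_HEADER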
That, as well as "sym = expr", if expr contains symbols.
Slightly tricky because a definition from .set is overridable,
whereas proper definitions aren't.
This doesn't yet allow using this for override tricks from C
and global asm blocks because the symbol tables from C and asm
are separate.
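Illustrative asm fragment (symbol names made up):
    bar:    nop
    foo = bar + 8           /* "sym = expr" where expr contains a symbol */
    .set baz, bar + 16      /* overridable definition via .set */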
But, like GCC, do warn about changes in signedness. The latter
leads to some changes in gen_assign_cast so that it doesn't also warn about
unsigned* = int*
(where GCC warns, but only with extra warnings).
This requires correctly handling the REX prefix.
As a bonus we now also support the four 8bit registers
spl, bpl, sil, dil, which are decoded as ah, ch, dh, bh in non-long-mode
(and require a REX prefix as well).
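For example:
    movb $1, %sil           # needs a REX prefix; the same encoding without REX means %dh
    movb %spl, (%rax)       # likewise %spl would decode as %ah without REX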
For
union U { struct { int a, b; }; int c; };
union U u = {{ 1, 2, }};
The unnamed first member of union U needs to actually exist in the
structure so initializer parsing isn't confused about the double braces.
That means the a and b members must also be part of _that_, not of
union U directly. Which in turn means we need to do a bit more work
for field lookup.
See the testcase extension for more things that need to work.
Remove dead code and variables. Properly check for unions when
skipping fields in initializers. Make tests2/*.expect depend
on the .c files so they are automatically rebuilt when the latter
change.
E.g. "struct { struct S s; int a;} = { others, 42 };"
if 'others' is also a 'struct S'. Also when the value is a
compound literal. See added testcases.
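Roughly like the added testcases (illustrative):
    struct S { int x, y; };
    struct S others = { 1, 2 };
    struct { struct S s; int a; } t1 = { others, 42 };               /* value of the same type */
    struct { struct S s; int a; } t2 = { (struct S){ 3, 4 }, 43 };   /* compound literal */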
This snippet is valid:
void foo(void);
... foo + 42 ...
the function designator is converted to a pointer to function
implicitly. gen_op didn't do that and bailed out.
This must compile:
typedef int arrtype1[];
arrtype1 sinit19 = {1};
arrtype1 sinit20 = {2,3};
and generate two arrays of one and two elements, respectively. Before
the fix the determined size of the first array was encoded in the type
directly, so sinit20 couldn't be parsed anymore (because arrtype1
was thought to be only one element long).
lar can accept multiple sizes as well (wlx), like lsl. When using
autosize it's important to look at the destination operand first;
when it's a register, that one determines the size, not the input
operand.
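For example:
    lar (%ebx), %eax        # no suffix: the register destination %eax selects the 32-bit form
    larw (%ebx), %ax        # explicit 16-bit variant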
Given this code:
struct __attribute__((...)) Name {...};
TCC was eating "Name", hence generating an anonymous struct.
It also didn't apply any packed attributes to the parsed
members. Both fixed. The testcase also contains a case
that isn't yet handled by TCC (under a BROKEN #define).
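A sketch of the now-working part (sizes assume a typical 32/64-bit target):
    struct __attribute__((packed)) P { char c; int i; };
    /* the tag P must be kept, and packed must apply to the members: sizeof(struct P) == 5 */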
In particular, subtracting a defined symbol from the current section
makes the value PC relative, and .org accepts symbolic expressions
as well, if the symbol is from the current section.
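Illustrative fragments ('ext' stands for any symbol defined elsewhere):
    here:
        .long ext - here        /* 'here' is in the current section: value becomes PC relative */
        .org here + 32          /* .org with a symbolic expression from the current section */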
When tokens in macro definitions need cstr_buf inside get_tok_str,
the second might overwrite the first in macro_is_equal (happens when
tokens are multi-character non-identifiers, see testcase),
failing to diagnose a difference. Use a real local buffer.
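A case in the spirit of the testcase: an identical redefinition made of
multi-character non-identifier tokens must not be reported as different:
    #define OPS >>= <<=
    #define OPS >>= <<=     /* must compare equal, no redefinition warning */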
Now that we can express prefixes with 0x0fxx opcodes, we can correct the
movq mem64->xmm opcode, and restrict the movq xmm->mem64 form to
not invalidly accept mmx.
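The forms in question, for reference:
    movq (%rax), %xmm0      # mem64 -> xmm (f3 0f 7e)
    movq %xmm0, (%rax)      # xmm -> mem64 (66 0f d6); must not accept an mmx register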
In particular those that are extensions of existing mmx (or sse1)
instructions by a simple 0x66 prefix. There's one caveat for
x86-64: as we don't yet correctly handle the 0xf3 prefix
the movq mem64->xmm is wrong (tested in asmtest.S). Needs
some refactoring of the instr_type member.
- generate and use SYM@PLT for plt addresses
- get rid of patch_dynsym_undef hack (no idea what it did on FreeBSD)
- use sym_attrs instead of symtab_to_dynsym
- special case for function pointers into .so on i386
- libtcc_test: test tcc_add_symbol with data object
- move target specific code to *-link.c files
- add R_XXX_RELATIVE (needed for PE)
The problem was with tcctest.c:
unsigned set;
__asm__("btsl %1,%0" : "=m"(set) : "Ir"(20) : "cc");
when tcc was compiled with the HAVE_SELINUX option and run with
tcc -run, it would use large addresses far beyond the 32bit
range, because tcc did not use the pc-relative mode for accessing
'set' in global data memory. In fact the assembler did not
know about %rip at all.
Changes:
- memory operands use (%rax) not (%eax)
- conversion from VT_LLOCAL: use type VT_PTR
- support 'k' modifier
- support %rip register
- support X(%rip) pc-relative addresses
The test in tcctest.c is from Michael Matz.
Previously in order to perform a ll+ll operation tcc
was trying to 'lexpand' PTR in gen_opl which did
not work well. The case:
int printf(const char *, ...);
char t[] = "012345678";
int main(void)
{
    char *data = t;
    unsigned long long r = 4;
    unsigned a = 5;
    unsigned long long b = 12;
    *(unsigned*)(data + r) += a - b;
    printf("data %s\n", data);
    return 0;
}
Also:
- regenerate all tests/pp/*.expect with gcc
- test "insert one space" feature
- test "0x1E-1" in asm mode case
- PARSE_FLAG_SPACES: ignore \f\v\r better
- tcc.h: move some things
tccgen.c:gv(): when loading a long long from an lvalue, it previously
saved all registers, which caused problems in the arm
function call register parameter preparation, as with:
void foo(long long y, int x);
int main(void)
{
    unsigned int *xx[1], x;
    unsigned long long *yy[1], y;
    foo(**yy, **xx);
    return 0;
}
Now only the modified register is saved if necessary,
as in this case where it is used to store the result
of the post-inc:
    long long *p, v, **pp;
    v = 1;
    p = &v;
    p[0]++;
    printf("another long long spill test : %lld\n", *p);
i386-gen.c :
- found a similar problem with TOK_UMULL caused by the
vstack juggle in tccgen:gen_opl()
(bug seen only when using EBX as 4th register)
This was causing assembler bugs in a tcc compiled by itself
at i386-asm.c:352 when ExprValue.v was changed to uint64_t:
    if (op->e.v == (int8_t)op->e.v)
        op->type |= OP_IM8S;
A general test case:
#include <stdio.h>
int main(int argc, char **argv)
{
    long long ll = 4000;
    int i = (char)ll;
    printf("%d\n", i);
    return 0;
}
Output was "4000", now "-96".
Also: add "asmtest2" as asmtest with tcc compiled by itself
On 2016-08-11 09:24 +0100, Balazs Kezes wrote:
> I think it's just that that copy_params() never restores the spilled
> registers. Maybe it needs some extra code at the end to see if any
> parameters have been spilled to stack and then restore them?
I've spent some time on this and I've found an alternative solution.
Although I'm not entirely sure about it, I've attached a patch
nevertheless.
And while poking at that I've found another problem affecting the
unsigned long long division on arm and I've attached a patch for that
too.
More details in the patches themselves. Please review and consider them
for merging! Thank you!
--
Balazs
[PATCH 1/2] Fix slow unsigned long long division on ARM
The macro AEABI_UXDIVMOD expands to this bit:
#define AEABI_UXDIVMOD(name,type, rettype, typemacro) \
...
while (num >= den) { \
...
while ((q << 1) * den <= num && q * den <= typemacro ## _MAX / 2) \
q <<= 1; \
...
With the current ULONG_MAX version the inner loop goes only until 4
billion so the outer loop will progress very slowly if num is large.
With ULLONG_MAX the inner loop works as expected. The current version is
probably a result of a typo.
The following bash snippet demonstrates the bug:
$ uname -a
Linux eper 4.4.16-2-ARCH #1 Wed Aug 10 20:03:13 MDT 2016 armv6l GNU/Linux
$ cat div.c
int printf(const char *, ...);
int main(void) {
    unsigned long long num, denom;
    num = 12345678901234567ULL;
    denom = 7;
    printf("%lld\n", num / denom);
    return 0;
}
$ time tcc -run div.c
1763668414462081
real 0m16.291s
user 0m15.860s
sys 0m0.020s
[PATCH 2/2] Fix long long dereference during argument passing on ARMv6
For some reason the code spills the register to the stack. copy_params
in arm-gen.c doesn't expect this so bad code is generated. It's not
entirely clear why the saving part is necessary. It was added in commit
59c35638 with the comment "fixed long long code gen bug" with no further
clarification. Given that tcctest.c passes without this, maybe it's no
longer needed? Let's remove it.
Also add a new testcase just for this. After I've managed to make the
tests compile on a raspberry pi, I get the following diff without this
patch:
--- test.ref 2016-08-22 22:12:43.380000000 +0100
+++ test.out3 2016-08-22 22:12:49.990000000 +0100
@@ -499,7 +499,7 @@
2
1 0 1 0
4886718345
-shift: 9 9 9312
+shift: 291 291 291
shiftc: 36 36 2328
shiftc: 0 0 9998683865088
manyarg_test:
More discussion on this thread:
https://lists.nongnu.org/archive/html/tinycc-devel/2016-08/msg00004.html
Except
- that libtcc1.a is now installed in subdirs i386/ etc.
- the support for arm and arm64
- some of the "Darwin" fixes
- tests are mostly unchanged
Also
- removed the "legacy links for cross compilers" (was a total mess)
- removed "out-of-tree" build support (was broken anyway)
- from win32/include/winapi: various .h
The winapi header set cannot be complete no matter what. So
let's have just the minimal set necessary to compile the examples.
- remove CMake support (hard to keep up to date)
- some other files
Also, drop useless changes in win32/lib/(win)crt1.c
fixes 5c35ba66c5
The implementation was consistent within tcc but incompatible
with the ABI (for example, library functions such as vprintf).
Also:
- tccpp.c/get_tok_str() : avoid the "unknown format '%llu'" warning
- x86_64_gen.c/gen_vla_alloc() : fix vstack leak
There were two errors in the arithmetic imm8 instructions. They accept
only REGW, and in case the user writes an xxxb opcode that variant
needs to be rejected as well (it's not automatically rejected by REGW
in case the destination is memory).
Two things: first, negative constants were rejected (e.g. "add $-15,%eax").
Second, the insn order was such that the arithmetic IM8S forms
weren't used (always the IM32 ones). Switching them prefers those
but requires a fix for size calculation in case the opcodes were
OPC_ARITH and OPC_WLX (whose size starts with 1, not zero).
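For example:
    add $-15, %eax          # previously rejected; now encoded with the short IM8S form
    add $1000, %eax         # still uses the IM32 form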
Fix it to actually be able to parse 64bit immediates (enlarge
operand value type). Then, generally there's no need for accepting
IM64 anywhere, except in the 0xba+r mov opcodes, so OP_IM is
unnecessary, as is OPT_IMNO64. Improve the generated code a bit
by preferring the 0xc7 opcode for im32->reg64, instead of the
im64->reg64 form (which we therefore hardcode).
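For example:
    movq $0x1122334455667788, %rax    # real 64bit immediate: only the 0xb8+r mov form
    movq $-1, %rax                    # fits a sign-extended im32: the 0xc7 form is preferred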
A bag of assembler fixes, to be either compatible with GAS
(e.g. order of 'test' operands), accept more instructions,
count correct foo{bwlq} variants on x86_64, fix modrm/sib bytes
on x86_64 to not use %rip relative addressing mode, to not use
invalid insns in tests/asmtest.S for x86_64.
Result is that now output of GAS and of tcc on tests/asmtest.S
is mostly the same.
Insert a space when it is required to prevent mistokenisation of
the output, and also in a few cases where it is not strictly
required, imitating GCC's behaviour.
* correct -E output for the ++ + ++ concatenation case;
do this only for strings expanded from macros
and only when tcc_state->output_type == TCC_OUTPUT_PREPROCESS
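For example (illustrative):
    #define INC ++
    /* with -E, "x INC + INC y" must come out as "x ++ + ++ y"; without the inserted
       space it would re-tokenize differently when read back */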
From gcc docs: "You may also specify attributes between the enum, struct or union tag and the name of the type rather than after the closing brace."
Adds `82_attribs_position.c` in `tests/tests2`
Made like in pcc
(pcc.ludd.ltu.se/ftp/pub/pcc-docs/pcc-utf8-ver3.pdf).
We treat all chars with the high bit set as alphabetic.
This allows code like:
#include <stdio.h>
int Lefèvre=2;
int main() {
    printf("Lefèvre=%d\n", Lefèvre);
    return 0;
}
Modified version of the old one, which doesn't allow '.'
in #define identifiers. This allows correct preprocessing of
the following code in *.S:
#define SRC(y...) \
9999: y; \
.section __ex_table, "a"; \
.long 9999b, 6001f ; \
// .previous
SRC(1: movw (%esi), %bx)
6001:
A test is included.
the check on incomplete struct/union/enum types was too early,
disallowing mixed specifiers and qualifiers. Simply rely on
the size (->c) field for that. See testcases.
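A guess at the kind of declarations involved (illustrative only):
    struct S;                   /* incomplete type */
    const struct S *p;          /* qualifier mixed with the incomplete struct specifier */
    struct S const *q;          /* qualifier after the specifier */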
* Documentation is now in "docs".
* Source code is now in "src".
* Misc. fixes here and there so that everything still works.
I think I got everything in this commit, but I only tested this
on Linux (Make) and Windows (CMake), so I might've messed
something up on other platforms...
Just for testing. It works for me (doesn't break anything).
Small fixes for x86_64-gen.c in "tccpp: fix issues, add tests"
are dropped in favor of this patch.
Pip Cet:
Okay, here's a first patch that fixes the problem (but I've found
another bug, yet unfixed, in the process), though it's not
particularly pretty code (I tried hard to keep the changes to the
minimum necessary). If we decide to actually get rid of VT_QLONG and
VT_QFLOAT (please, can we?), there are some further simplifications in
tccgen.c that might offset some of the cost of this patch.
The idea is that an integer is no longer enough to describe how an
argument is stored in registers. There are a number of possibilities
(none, integer register, two integer registers, float register, two
float registers, integer register plus float register, float register
plus integer register), and instead of enumerating them I've
introduced a RegArgs type that stores the offsets for each of our
registers (for the other architectures, it's simply an int specifying
the number of registers). If someone strongly prefers an enum, we
could do that instead, but I believe this is a place where keeping
things general is worth it, because this way it should be doable to
add SSE or AVX support.
There is one line in the patch that looks suspicious:
} else {
addr = (addr + align - 1) & -align;
param_addr = addr;
addr += size;
- sse_param_index += reg_count;
}
break;
However, this actually fixes one half of a bug we have when calling a
function with eight double arguments "interrupted" by a two-double
structure after the seventh double argument:
f(double, double, double, double, double, double, double, struct { double x, y; }, double);
In this case, the last argument should be passed in %xmm7. This patch
fixes the problem in gfunc_prolog, but not the corresponding problem
in gfunc_call, which I'll try tackling next.
* fix some macro expansion issues
* add some pp tests in tests/pp
* improved tcc -E output for better diff'ability
* remove -dD feature (quirky code, exotic feature,
didn't work well)
Based partially on ideas / researches from PipCet
Some issues remain with VA_ARGS macros (if used in a
rather tricky way).
Also, to keep it simple, the pp doesn't automatically
add any extra spaces to separate tokens which otherwise
would form wrong tokens if re-read from tcc -E output
(such as '+' '='). GCC does that, other compilers don't.
* cleanups
- #line 01 "file" / # 01 "file" processing
- #pragma comment(lib,"foo")
- tcc -E: forward some pragmas to output (pack, comment(lib))
- fix macro parameter list parsing mess from
a3fc543459a715d7143d
(some coffee might help, next time ;)
- introduce TOK_PPSTR - to have character constants as
written in the file (similar to TOK_PPNUM)
- allow '\' to appear in macros
- new functions begin/end_macro to:
- fix switching macro levels during expansion
- allow unget_tok to unget more than one tok
- slight speedup by using bitflags in isidnum_table
Also:
- x86_64.c : fix decl after statements
- i386-gen.c : fix a vstack leak with VLA on windows
- configure/Makefile : build on windows (MSYS) was broken
- tcc_warning: fflush stderr to keep output order (win32)
This patch disables the optimization of saving stack pointers lazily,
which didn't fully take into account that control flow might not reach
the stack-saving instructions. I've decided to leave in the extra calls
to vla_sp_save() in case anyone wants to restore this optimization.
Tests added and enabled.
There are two remaining bugs: VLA variables can be modified, and jumping
into the scope of a declared VLA will cause a segfault rather than a
compiler error. Both of these do not affect correct C code, but should
be fixed at some point. Once VLA variables have been made properly
immutable, we can share them with the saved stack pointer and save stack
and instructions.
The old code assumed that if an argument doesn't fit into the available
registers, none of the subsequent arguments do, either. But that's
wrong: passing 7 doubles, then a two-double struct, then another double
should generate code that passes the 9th argument in the 8th register
and the two-double struct on the stack. We now do so.
However, this patch does not yet fix the function calling code to do the
right thing in the same case.
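The case in question, roughly:
    struct two { double x, y; };
    void f(double, double, double, double, double, double, double, struct two, double);
    /* 7 doubles occupy %xmm0-%xmm6; the struct would need two SSE regs but only %xmm7
       is left, so it goes on the stack; the trailing double is then passed in %xmm7 */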
With the x86_64 Linux ELF ABI, we're currently failing two of these
three tests, which have been disabled for now. The problem is mixed
structures such as struct { double x; char c; }, which the x86_64 ABI
specifies are to be passed/returned in one integer register and one SSE
register; our current approach, marking the structure as VT_QLONG or
VT_QFLOAT, fails in this case.
(It's possible to fix this by getting rid of VT_QLONG and VT_QFLOAT
entirely as at https://github.com/pipcet/tinycc, but the changes aren't
properly isolated at present. Anyway, there might be a less disruptive
fix.)
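An example of such a mixed structure (per the SysV x86_64 classification rules):
    struct mixed { double d; char c; };
    /* first eightbyte (d) has class SSE, second eightbyte (c) has class INTEGER:
       passed/returned in one SSE and one integer register */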
- pop_macro incorrect with initially undefined macro
- horrible implementation (tcc_open_bf)
- crashes eventually (abuse of Sym->prev_tok)
- the (unrelated) asm_label part is the opposite of a fix
(Despite its name this variable has nothing to do with
the built-in assembler)
This reverts commit 0c8447db79.
I ran into an issue playing with tinycc, and tracked it down to a rather
weird assumption in the function calling code. This breaks only when
varargs and float/double arguments are combined, I think, and only when
calling GCC-generated (or non-TinyCC, at least) code. The problem is we
sometimes generate code like this:
804a468: 4c 89 d9 mov %r11,%rcx
804a46b: b8 01 00 00 00 mov $0x1,%eax
804a470: 48 8b 45 c0 mov -0x40(%rbp),%rax
804a474: 4c 8b 18 mov (%rax),%r11
804a477: 41 ff d3 callq *%r11
for a function call. Note how %eax is first set to the correct value,
then clobbered when we try to load the function pointer into %r11. With
the patch, the code generated is:
804a468: 4c 89 d9 mov %r11,%rcx
804a46b: b8 01 00 00 00 mov $0x1,%eax
804a470: 4c 8b 5d c0 mov -0x40(%rbp),%r11
804a474: 4d 8b 1b mov (%r11),%r11
804a477: 41 ff d3 callq *%r11
which is correct.
This becomes an issue when get_reg(RC_INT) is modified to not always
return %rax after a save_regs(0), because then another register (%ecx,
say) is clobbered, and the function is passed an invalid argument.
A rather convoluted test case that generates the above code is
included. Please note that the test will not cause a failure because
TinyCC code ignores the %rax argument, but it will cause incorrect
behavior when combined with GCC code, which might wrongly fail to save
XMM registers and cause data corruption.
* give a warning if a pragma is unknown to tcc
* don't free asm_label in sym_free();
that's the job of asm_free_labels().
The above pragmas are used in the mingw headers.
These pragmas are implemented in gcc-4.5+ and current
clang.
Commit 5ce2154c ("-fdollar-in-identifiers addon", 20-04-2015) forgot
to include the test files from Daniel's patch.
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
* define targetos=Windows when --enable-tcc32-mingw, --enable-cygwin, ...
* use TARGETOS instead of HOST_OS when selecting PROGS
* use "$(tccdir)" instead of $(tccdir) on install (spaces in path)
* install tcc.exe too
* produce bcheck.o when cross-compiling too (lib/Makefile)
* force bcheck.o linking by compiling inside tcc_set_output_type()
a dummy program with a local array. Otherwise bcheck.o may not be linked.
* replace the %xz format specifier with %p in bcheck (not supported on
Windows)
* call __bound_init when __bound_ptr_add, __bound_ptr_indir,
__bound_new_region or __bound_delete_region is called.
This is because __bound_init inside the ".init" section is not called
on Windows for an unknown reason.
* print on stderr a message when an illegal pointer is returned:
there is no segmentation violation on Windows for a program
compiled with "tcc -b"
* remove "C:" subdir on clean if $HOST_OS = "Linux"
* default CFLAGS="-Wall -g -O0" instead of CFLAGS="-Wall -g -O2"
to speed up compilation and allow more precise debugging.
------------ libtest ------------
./libtcc_test lib_path=..
<string>:11: warning: implicit declaration of function 'printf'
<string>:13: warning: implicit declaration of function 'add'
------------ test3 ------------
tcctest.c:1982: warning: implicit declaration of function 'putchar'
tcctest.c:2133: warning: implicit declaration of function 'strlen'
int i = i++ causes a segfault because of a missing guard. Looking
recursively at all backend functions called from the middle end,
several more guards appeared to be missing.
From: Matteo Cypriani <mcy@lm7.fr>
Date: Fri, 5 Sep 2014 23:22:56 -0400
Subject: Disable floating-point test for ARM soft-float
tcc is not yet capable of doing softfloat floating-point operations on
ARM, therefore we disable this test for these platforms. Note that tcc
displays a warning to inform ARM users about this limitation.
(debian)
An undeclared function leads to serious problems. And while
gcc doesn't turn this warning on by default, let tcc do it. This warning
can be turned off with the -Wno-implicit-function-declaration option,
and the author must do that explicitly if the program must be compiled
with this warning off.
* tccpp.c (parse_number): `shift' should be 1 while parsing binary
floating point number.
* tests/tests2/70_floating_point_literals.c: New test cases for
floating point number parsing.
Clear CFLAGS and LDFLAGS to build the tests, in case the main Makefile
passes some flags that aren't handled by tcc (we are not compiling tcc
here, we are using tcc to compile the tests).
- remove -norunsrc switch
Meaning and usage (-run -norun...???) look sort of screwed. Also its
general usefulness is unclear; it actually existed to support exactly
one (not even very interesting) test.
This partially reverts e31579b076
This provides a simple implementation of alloca for ARM (and enables
the associated testcase). As tcc for ARM doesn't contain an assembler,
we'll have to resort using gcc for compiling it.
The result of a float to unsigned integer conversion is undefined if the
float is negative. This commit takes the absolute value of the float before
doing the conversion to unsigned integer and adds more float-to-integer
conversion tests.
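The problematic case, for example:
    float f = -1.5f;
    unsigned u = (unsigned)f;   /* undefined in ISO C when the value is negative */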
- Thanks to Kirill "tcc -b itself" should work now
(was removed in d5f4df09ff)
Also:
- tests/Makefile:
- fix spurious --I from 767410b875
- lookup boundtest.c via VPATH (for out-of-tree build)
- test[123]b?: fail on diff error
- Windows: test3 now works (from e31579b076)
- abitest: a libtcc.a made by gcc is not usable for tcc
on Windows - using source instead (libtcc.c)
- tccpe:
- avoid gcc warning (x86_64)
negate(x) is subtract(-0,x), not subtract(+0,x), which makes
a difference with signed zeros. Also +x was expressed as x+0,
in order for the integer promotions to happen, but also mangles signed
zeros, so just don't do that with floating types.
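For example:
    double pz = 0.0, nz = -0.0;
    /* -pz must be -0.0, but 0.0 - pz evaluates to +0.0;
       +nz must stay -0.0, but nz + 0.0 evaluates to +0.0 */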
- Build libtcc1 for cross-compiler on arm (arm to X cross compilers)
- Install libtcc1 and includes for arm to i386 cross compiler
- Add basic check of cross-compilers (compile ex1.c)
- tccgen: error out for cast to void, as in
void foo(void) { return 1; }
This avoids an assertion failure in x86_64-gen.c, also.
also fix tests2/03_struct.c accordingly
- Error: "memory full" - be more specific
- Makefiles: remove circular dependencies, lookup tcctest.c from VPATH
- tcc.h: cleanup lib, include, crt and libgcc search paths
avoid duplication or trailing slashes with no CONFIG_MULTIARCHDIR
(as from 9382d6f1a0)
- tcc.h: remove ";{B}" from PE search path
in ce5e12c2f9 James Lyon wrote:
"... I'm not sure this is the right way to fix this problem."
And the answer is: No, please. (copying libtcc1.a for tests instead)
- win32/build_tcc.bat: do not move away a versioned file
stdarg_test in abitest.c relies on a sum of some parameters made by both
the caller and the callee to reach the same result. However, the
variables used to store the temporary result of the additions are not
initialized to 0, leading to uncertainty as to the results. This commit
adds the needed initialization.
VLA storage is now freed when it goes out of scope. This makes it
possible to use a VLA inside a loop without consuming an unlimited
amount of memory.
Combining VLAs with alloca() should work as in GCC - when a VLA is
freed, memory allocated by alloca() after the VLA was created is also
freed. There are some exceptions to this rule when using goto: if a VLA
is in scope at the goto, jumping to a label will reset the stack pointer
to where it was immediately after the last VLA was created prior to the
label, or to what it was before the first VLA was created if the label
is outside the scope of any VLA. This means that in some cases combining
alloca() and VLAs will free alloca() memory where GCC would not.
long double arguments require 16-byte alignment on the stack, which
requires adjustment when the stack offset is not an even number of
8-byte words.
I really should do this when less tired; I keep breaking one platform
while fixing another. I've also fixed some Windows issues with tcctest
since Windows printf() uses different format flags to those on Linux,
and removed some conditional compilation tests in tcctest since they
now should work.
I removed the XMM6/7 registers from the register list because they are not used
on Win64; however, they are necessary for parameter passing on x86-64. I have now
restored them but not marked them with RC_FLOAT, so they will not be used except
for parameter passing.
I've had to introduce the XMM1 register to get the calling convention
to work properly, unfortunately this has broken a fair bit of code
which assumes that only XMM0 is used.
There are probably still issues on x86-64 I've missed.
I've added a few new tests to abitest, which fail (2x long long and 2x double
in a struct should be passed in registers).
abitest now passes; however test1-3 fail in init_test. All other tests
pass. I need to re-test Win32 and Linux-x86.
I've added a dummy implementation of gfunc_sret to c67-gen.c so it
should now compile, and I think it should behave as before I created
gfunc_sret.
I expect that Linux-x86 is probably fine. All other architectures
except ARM are definitely broken since I haven't yet implemented
gfunc_sret for these, although replicating the current behaviour
should be straightforward.
Only one test so far, which fails on Windows (with MinGW as the native
compiler - I've tested the MinGW output against MSVC and it appears the
two are compatible).
I've also had to modify tcc.h so that tcc_set_lib_path can point to the
directory containing libtcc1.a on Windows to make the libtcc dependent
tests work. I'm not sure this is the right way to fix this problem.
Modified tcctest.c so that it uses 'double' in place of 'long double'
with MinGW since this is what TCC does, and what Visual C++ does. Added
an option -norunsrc to tcc to allow argv[0] to be set independently of
the compiled source when using tcc -run, which allows tests that rely on
the value of argv[0] to work in out-of-tree builds.
Also added Makefile rules to automatically update out-of-tree build
Makefiles when in-tree Makefiles have changed.
tests/Makefile:
- print-search-dirs when 'hello' fails
- split off hello-run
win32/include/_mingw.h:
- fix for compatibility with mingw headers
(While our headers in win32 are from mingw-64 and don't have
the problem)
tiny_libmaker:
- don't use "dangerous" mktemp
Also:
- fix "make tcc_p" (profiling version)
- remove old gcc flags:
-mpreferred-stack-boundary=2 -march=i386 -falign-functions=0
- remove test "hello" for Darwin (cannot compile to file)
tests:
- add "hello" to test first basic compilation to file/memory
- add "more" test (tests2 suite)
- remove some tests
tests2:
- move into tests dir
- Convert some files from DOS to unix LF
- remove 2>&1 redirection
win32:
- tccrun.c: modify exception filter to exit correctly (needed for btest)
- tcctest.c: exclude weak_test() (feature does not exist on win32)
gcc fails the builtin_frame_address test on ARM so we disable it. As a
consequence, the diff between gcc's and tcc's output is unnecessarily
bigger. Given the current size of the diff this doesn't make a
big difference, but it may help detect a regression in tcc's
implementation of builtin_frame_address.
* configure (fn_dirname): New.
Use it to ensure the creation of proper symlinks to Makefiles.
(config.mak): Define top_builddir and top_srcdir.
(CPPFLAGS): Be sure to find the headers.
* Makefile, lib/Makefile, tests/Makefile, tests2/Makefile: Adjust
to set VPATH properly.
Fix confusion between top_builddir and top_srcdir.
Just like with test[123], add their test[123]b variants. After the previous 3
patches all tests pass here on Debian GNU/Linux on i386 with gcc-4.7, with
or without memory randomization turned on.
After 40a54c43 (Repair bounds-checking runtime), and in particular
5d648485 (Now btest pass!), `make test` was broken on ARCH != i386,
because I had changed btest to unconditionally run on all arches.
But bounds-checking itself is only supported on i386, and oops...
Fix it.
Reported-by: Thomas Preud'homme <robotux@celest.fr>
Continuing d6072d37 (Add __builtin_frame_address(0)) implement
__builtin_frame_address for levels greater than zero, in order for
tinycc to be able to compile its own lib/bcheck.c after
cffb7af9 (lib/bcheck: Prevent __bound_local_new / __bound_local_delete
from being miscompiled).
I'm new to the internals, and used the most simple way to do it.
Generated code is not very good for levels >= 2, compare
              gcc                     tcc
level=0       mov %ebp,%eax           lea 0x0(%ebp),%eax
level=1       mov 0x0(%ebp),%eax      mov 0x0(%ebp),%eax
level=2       mov 0x0(%ebp),%eax      mov 0x0(%ebp),%eax
              mov (%eax),%eax         mov %eax,-0x10(%ebp)
                                      mov -0x10(%ebp),%eax
                                      mov (%eax),%eax
level=3       mov 0x0(%ebp),%eax      mov 0x0(%ebp),%eax
              mov (%eax),%eax         mov (%eax),%ecx
              mov (%eax),%eax         mov (%ecx),%eax
But this is still an improvement and for bcheck we need level=1 for
which the code is good.
For the tests I had to force gcc use -O0 to not inline the functions.
And -fno-omit-frame-pointer just in case.
If someone knows how to improve the generated code - help is
appreciated.
Thanks,
Kirill
Cc: Michael Matz <matz@suse.de>
Cc: Shinichiro Hamaji <shinichiro.hamaji@gmail.com>
We are now compatible with the 0.9.25 version though. A special
value for the second (ptr) argument is used to get the simple
behavior as with the 0.9.24 version.
The intent is for 'make test' to pass cleanly on each platform, and thus make
spotting regressions easier. Linux is best supported, with most tests running and
passing. Mac OSX passes most tests that do not make/link with binary files,
due to the lack of mach-o file support.
!!! I have very limited knowledge of the Windows platform, and cannot comment why
all tests(1) fail. I have posted to the newsgroup asking for someone to test
the Windows platform.
Loads of VT_LLOCAL values (which effectively represent saved
addresses of lvalues) were done in VT_INT type, losing the upper
32 bits. They need to be done in VT_PTR type.
Sometimes the result of a comparison is not directly used in a jump,
but in arithmetic or further comparisons. If those further things
do a vswap() with the VT_CMP as current top, and then generate
instructions for the new top, this most probably destroys the flags
(e.g. if it's a bitfield load like in the example).
vswap() must do the same like vsetc() and not allow VT_CMP vtops
to be moved down.
This matters when sizeof is directly used in arithmetic,
ala "uintptr_t t; t &= -sizeof(long)" (for alignment). When sizeof
isn't size_t (as it's specified to be) this masking will truncate
the high bits of the uintptr_t object (if uintptr_t is larger than
uint).
This needs to be accepted:
typedef int foo;
void f (void) { foo: return; }
namespaces for labels and types are different. The problem is that
the block parser always tries to find a decl first and that routine
doesn't peek enough to detect this case. Needs some adjustments
to unget_tok() so that we can call it even when we already called
it once, but next() didn't come around restoring the buffer yet.
(It lazily does so not when the buffer becomes empty, but rather
when the next call detects that the buffer is empty, i.e. it requires
two next() calls until the unget buffer gets switched back).
When offsetted addresses of global non-static data are computed
multiple times in the same statement the x86_64 backend uses
gen_gotpcrel with offset, which implements an add insn on the
register given. load() uses the R member of the to-be-loaded
value, which doesn't yet have a reg assigned in all cases.
So use the register we're supposed to load the value into as
that register.
The first loop setting up struct arguments must not remove
elements from the vstack (via vtop--), as gen_reg needs them to
potentially evict some argument still held in registers to stack.
Swapping the arg in question to the top (and back to its place) also
simplifies the vstore call itself, as no funny save/restore of some
"non-existing" stack elements needs to be done.
Generally, for a stack, a vtop-- operation conceptually clobbers
that element, so further references to it aren't allowed anymore.
Removes a premature optimization of char/short loads
rewriting the source type. It did so also for bitfield
loads, thereby removing all the shifts/maskings.
(cond ? 0 : ptr)->member wasn't handled correctly. If one arm
is a null pointer constant (which also can be a pointer) the result
type is that of the other arm.
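For example:
    struct S { int member; } s, *ptr = &s;
    int cond = 0;
    int v = (cond ? 0 : ptr)->member;   /* 0 is a null pointer constant: the result type is struct S * */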
- tests/Makefile:
fix commit de54586d5b
This hunk is unrelated to the other changes (which are about MacOSX).
It is not useful and partially wrong. Optional tests are meant to
stay optional, btest would work only for i386
- tcc.h:
fix commit c52d79605a by unknown
The message says it's for MINGW but the patch has obviously
no effect for MINGW (which defines __GNUC__). However the patch
seems useful for MSC, which needs _strto(u)i64 with the underscore.
- Makefile:
fix commit 5280293d6b
Do not build tcc.o with -DONE_SOURCE because we finally build tcc
from tcc.o and libtcc.a/so
Other tests still have issues, currently with weak linking.
One of the primary stumbling blocks on OSX is the lack of support for
mach-o binaries. Therefore all tcc usage on OSX has to be limited to elf
binaries, presumably produced by tcc itself.
Therefore I had to enable building of tiny_libmaker for OSX, then changed
the make to use tcc and tiny_libmaker to compile libtcc1.
In order to compile the tests, specifically the parts that use weak linking,
I have had to define MACOSX_DEPLOYMENT_TARGET to 10.2, which seems like a
hack, but extensive searching seems to indicate that this is the only way
to make apple gcc allow weak linking. Using any other value, bigger or smaller
breaks weak linking.
Also added _ANSI_SOURCE define required by some OSX headers, and some cosmetic
gitignore changes. I believe these changes should not impact other platforms.
Applied a patch found on stackoverflow (link below). I also found some
related changes that looked logically needed. The stackoverflow
changes addressed only two registers which were breaking the compile.
However, reading the code in the same file shows two other register
accesses that, while not breaking the build, should have the same fix.
http://stackoverflow.com/questions/3712902/problems-compiling-tcc-on-os-x/3713144#3713144
The test driver was changed by changing 'cp -u' into 'cp' as '-u' is not
supported on mac osx.
I found that the osx build required the WITHOUT_LIBTCC define. I suspect the
reason for this is tcc's inability to handle mach-o files. In order to
properly address this I had to change 'configure' to propagate the target os
name to the Makefile.
Current state is that simple tests work, but not the whole 'make test'
suite runs.
To the best of my knowledge, these changes should not impact other
platforms.
array_test is declared and called with no parameters but defined with
one parameter. Compilation succeeds (the definition comes after the use, so the
compiler considers the declaration), as does linking (the function exists
and has the right name), but running the test segfaults on i386 platforms.
This patch moves the parameter to a local variable. If the intention was
to call it with an array parameter then feel free to fix it again.