Commit Graph

2252 Commits

grischka
90316c7c26 tcc: don't use pstrcpy, fix win32 spawn quoting
- we now export only tcc_-prefixed symbols from libtcc

- On windows, the msvcrt startup code would remove backslashes
  from commandline arguments such as
      -DFOO=\"foo\"
  which would appear in argv as
      -DFOO="foo"
  Therefore, before passing these to spawnvp, we need to restore
  the backslashes (see the sketch below).
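
A minimal sketch of that re-quoting step, assuming a hypothetical
helper (not the actual tcc code; error handling omitted):

    #include <stdlib.h>
    #include <string.h>

    /* Re-escape embedded double quotes so that an argument seen in
     * argv as -DFOO="foo" reaches the child process as -DFOO=\"foo\"
     * again when handed to spawnvp(). */
    static char *requote_arg(const char *s)
    {
        char *out = malloc(2 * strlen(s) + 1);  /* worst case: escape all */
        char *p = out;
        while (*s) {
            if (*s == '"')
                *p++ = '\\';        /* restore the stripped backslash */
            *p++ = *s++;
        }
        *p = 0;
        return out;
    }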
2017-02-08 19:49:28 +01:00
grischka
68666eee2a tccgen: factor out gfunc_return
Also:
- on windows i386 and x86-64, structures of size <= 8 are NOT
  returned in registers unless their size is exactly 1, 2, 4 or 8.
- cleanup: put all tv-push/pop/swap/rot into one place
2017-02-08 19:45:31 +01:00
grischka
f077d16c20 tccgen: gen_cast: cast FLOAT to DOUBLE
... to avoid precision loss when casting to int, and likewise
when saving FLOATs to the stack
2017-02-05 14:30:20 +01:00
grischka
3b84e61ead Revert "partial revert of the commit 4ad186c5ef61"
There seems to be nothing wrong.  With

    int t1 = 176401255;
    float f = 0.25;
    int t2 = t1 * f; // 176401255 * 0.25 = 44100313.75

according to the arithmetic conversion rules, the number
176401255 needs to be converted to float, and the compiler
can choose either the nearest higher or nearest lower
representable number "in an implementation-defined manner".

Which may be 176401248 or 176401264.  So as a result, both
44100312 and 44100313 are correct.

This reverts commit 664c19ad5e.
2017-02-05 14:30:19 +01:00
grischka
85fca9e924 tccrun: sort sections
Sort executable sections before the other sections.

Also apply RUN_SECTION_ALIGNMENT=63 for TCC_TARGET_I386.
2017-02-05 14:00:42 +01:00
grischka
ea2c36c5a9 tccrun: 'selinux' mmap: use only one mapping
In the previous implementation, the rx mapping was never
used.  Therefore it is assumed that it is not needed.

With only one mapping there is no reason to use a real
/tmp/.xxxx file either as we can use an anonymous mapping.
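
For illustration, a generic sketch of such a single anonymous mapping
(protection flags and error handling are assumptions, not taken from
the commit):

    #include <stddef.h>
    #include <sys/mman.h>

    /* One anonymous mapping instead of a temporary /tmp/.xxxx file:
     * MAP_ANONYMOUS needs no backing file descriptor at all. */
    static void *rt_map(size_t size)
    {
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return ptr == MAP_FAILED ? NULL : ptr;
    }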
2017-02-05 13:58:14 +01:00
David Mertens
5420bb8a67 SECTION_ALIGNMENT -> RUN_SECTION_ALIGNMENT, and tweaks
Based on feedback from grischka, this commit
(1) updates the name of the alignment constant to be more specific
(2) aligns all sections, including the first (which previously was
    not aligned)
(3) reduces the x86-64 alignment from 512 to 64 bytes.

The original x86-64 alignment of 512 bytes was based on testing.
After ensuring that the initial section is also aligned, the same
tests indicated that 64 bytes is sufficient.
2017-01-08 07:27:24 -05:00
David Mertens
5d1bc3fbd4 Architecture-specific section alignment
Tests found excessive cache thrashing on x86-64 architectures. The
problem was traced to the alignment of sections. This patch sets up
an architecture-specific alignment of 512 bytes for x86-64 and 16
bytes for all others. It uses preprocessor directives that, hopefully,
make it easy to tweak for other architectures.
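
A sketch of that preprocessor selection (macro names follow the commit
messages in this log; the ALIGN_UP helper is purely illustrative):

    /* architecture-specific section alignment: large on x86-64 (later
     * reduced to 64 bytes, see the commit above), small elsewhere */
    #ifdef TCC_TARGET_X86_64
    # define SECTION_ALIGNMENT 512
    #else
    # define SECTION_ALIGNMENT 16
    #endif

    /* round a section offset up to the chosen alignment */
    #define ALIGN_UP(n) (((n) + SECTION_ALIGNMENT - 1) \
                         & ~(size_t)(SECTION_ALIGNMENT - 1))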
2017-01-06 16:56:23 -05:00
Pavlas, Zdenek
0486939291 win32: support "-Wl,--large-address-aware" option 2016-12-30 03:01:33 -08:00
Avi Halachmi (:avih)
9b3e4c5895 tests: don't assume $(CC) is gcc
This also allows self hosting + testing when $(CC) is tcc.
2016-12-24 20:59:10 +02:00
grischka
71c5ce5ced tests: OOT build fixes etc.
tests/Makefile: fix out-of-tree build issues

Also:

- win64: align(16) MEM_DEBUG user memory
  on win64, the jmp_buf inside the TCCState structure (which we
  allocate with tcc_malloc) needs 16-byte alignment because the
  msvcrt setjmp uses MMX instructions.

- libtcc_test.c: win32/64 need __attribute__((dllimport)) for
  extern data objects

- tcctest.c: exclude stuff that gcc does not compile
  except for relocation_test() the other issues are mostly ASM
  related.  We should probably check GCC versions but I have
  no idea which mingw/gcc versions support what and which don't.

- lib/Makefile: use tcc to compile libtcc1.a (except on arm,
  which needs arm-asm)
2016-12-20 18:05:33 +01:00
Michael Matz
4beb469c91 Fix pseudo leak
Once-allocated buffers (here a string) that aren't explicitly freed
at program end but rather freed at _exit about 1 nanosecond later
are regarded as a leak by MEM_DEBUG, so explicitly free it.  Blaeh :-/
2016-12-20 05:42:25 +01:00
Michael Matz
42e2a67f23 Fix some code suppression fallout
Some more subtle issues with code suppression:
- outputting asms but not their operand setup is broken
- but global asms must always be output
- statement expressions are transparent to code suppression
- vtop can't be transformed from VT_CMP/VT_JMP when nocode_wanted

Also remove .exe files from tests2 if they don't fail.
2016-12-20 04:58:34 +01:00
grischka
559ee1e940 i386-gen: fix USE_EBX
Restore ebx from *ebp because alloca might change esp.

Also disable USE_EBX for upcoming release.

Actually the benefit is less than one would expect: it
appears that tcc can't do much with more than 3 registers,
except with extensive use of long longs, where the disassembly
looks much prettier (and shorter, too).

Also: tccgen/expr_cond(): fix wrong gv/save_regs order
2016-12-19 00:33:01 +01:00
grischka
d2332396e4 libtcc.c: -m option cleanup
handle -mms-bitfields as a sub-option of -m (-mfloat-abi
is still special because it requires an argument)

tcc.c: help():
- list -mms-bitfields under 'Target specific options'

libtcc.c/MEM_DEBUG
- add a check for writes past the end of the buffer
2016-12-18 22:57:03 +01:00
grischka
a1c12b9fb9 tests: add memory leak test
Also ...

tcctest.c:
- exclude stuff that gcc doesn't compile on windows.

libtcc.c/tccpp.c:
- use unsigned for memory sizes to avoid printf format warnings
- use "file:line: message" to make IDE error parsers happy.

tccgen.c: fix typo
2016-12-18 22:05:42 +01:00
grischka
f7fc4f02cf tccgen: nocode_wanted++/--
uses 'nocode_wanted' as a level counter instead of
'saved_nocode_wanted' everywhere.
2016-12-18 18:58:33 +01:00
grischka
e5efd18435 tccgen: fix expr_cond for alt. nocode_wanted
making sure that both the active and the passive branches
do exactly the same thing.
2016-12-18 18:55:55 +01:00
grischka
f843cadb6b tccgen: nocode_wanted alternatively
tccgen.c: remove any 'nocode_wanted' checks, except in
- greloca(), disables output of ELF symbols and relocs
- get_reg(), will return just the first suitable reg
- save_regs(), will do nothing

Some minor adjustments were made where nocode_wanted is set.

xxx-gen.c: disable code output directly where it happens
in functions:
- g(), output disabled
- gjmp(), will do nothing
- gtst(), ditto
2016-12-18 18:53:21 +01:00
Michael Matz
77d7ea04ac Fix gawk miscompile
See testcase.  Function pointer use was hosed when the destination
function wasn't also called normally by the program.
2016-12-18 05:20:14 +01:00
Michael Matz
cd9514abc4 i386: Fix various testsuite issues
On 32-bit targets, long long support was sometimes broken.  This fixes
code-gen for long long values in switches, disables an x86-64 specific
testcase and avoids an undefined shift amount.  It comments out
a bitfield test involving long long bitfields > 32 bit; with GCC layout
they can straddle multiple words and code generation isn't prepared
for this.
2016-12-15 17:53:09 +01:00
Michael Matz
3980e07fe5 arm64: Handle R_AARCH64_PREL32 again
This got lost when splitting reloc handling to individual files.
2016-12-15 17:49:57 +01:00
Michael Matz
b155432b65 arm64: Fix largeptr test
VT_PTR needs to be handled like VT_LLONG.
2016-12-15 17:49:56 +01:00
Michael Matz
b5b12b89a0 arm64: Fix a case of dead code suppression
82_nocode_wanted.c:kb_wait_2_1 was miscompiled on arm64.
2016-12-15 17:49:56 +01:00
Michael Matz
f5ae4daa5f struct-layout: Allow lowering of member alignment
when an alignment is explicitly given on the member itself,
or in its type's attributes, then respect it always.  Previously
it was only allowed to increase the alignment, but GCC allows
lowering it too.
2016-12-15 17:49:56 +01:00
Michael Matz
8859dc9e6d Support large alignment requests
The linux kernel has some structures that are page aligned,
i.e. aligned to 4096 bytes.  Instead of enlarging the bit fields to
specify this, use the fact that alignment is always a power of two,
and store only its log2 minus 1.  Five bits are then enough to specify
an alignment of 1 << 30.
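
A sketch of the encoding, with illustrative names and a 5-bit field
(the exact bias and field layout in tcc may differ):

    /* store a power-of-two alignment as a small log2-based code
     * instead of the value itself; 5 bits reach 1 << 30 */
    struct attr_bits {
        unsigned aligned : 5;       /* 0 = no explicit alignment */
        unsigned packed  : 1;
    };

    static unsigned align_encode(unsigned align)  /* align is a power of two */
    {
        unsigned code = 0;
        while (align >>= 1)
            code++;                 /* code = log2(align) */
        return code + 1;            /* biased so 0 can mean "not set" */
    }

    static unsigned align_decode(unsigned code)
    {
        return code ? 1u << (code - 1) : 0;
    }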
2016-12-15 17:49:56 +01:00
Michael Matz
d815a0f658 struct-layout: cleanup code a bit 2016-12-15 17:49:56 +01:00
Michael Matz
23b257a8d2 bitfields: Fix MS layout some more
Another corner case:
  struct foo6_1
  {
    char x;
    short p:8;
    short :0;
    short :0;
    short p2:8;
    char y;
  };

In MS layout the second anon :0 bit-field does _not_ adjust size or
alignment of the struct again.  The first one does, though.
2016-12-15 17:49:56 +01:00
Michael Matz
ed680da951 bitfields: fix PCC layout
Fixes some corner cases in PCC layout.  Testcases coming
up.
2016-12-15 17:49:56 +01:00
Michael Matz
bd69bce20f bitfields: Implement MS compatible layout
Bit-fields are laid out differently in Visual C, so this implements
a compatible mode.  Checked against Visual C/C++ 2016.
Unfortunately the GCC implementation of MS layout (behind
-mms-bitfields) is actually different, and hence not compatible
with MS in all cases :-/
2016-12-15 17:49:56 +01:00
Michael Matz
78c7096162 Fix struct layout some more
Anonymous sub-sub-members weren't handled correctly.  Neither were
bit-fields: this implements PCC layout for now.  It temporarily disables
MS-compatible bit-field layout.
2016-12-15 17:49:56 +01:00
Michael Matz
ddecb0e685 Split off record layouting
A struct declaration such as:

  struct S { char a; int i;} __attribute__((packed));

should be accepted and cause S to be five bytes long (i.e.
the packed attribute should matter).  So we can't lay out
the members already during parsing.  Split off the offset
and alignment calculation for this.
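
As a quick check of the intended result (a C89-style compile-time
assertion; the typedef name is chosen for illustration):

    struct S { char a; int i; } __attribute__((packed));

    /* would fail to compile if the packed attribute were ignored */
    typedef char assert_S_is_5_bytes[sizeof(struct S) == 5 ? 1 : -1];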
2016-12-15 17:49:56 +01:00
Michael Matz
5d6a9e797a x86-asm: Fix segfault
We need to access cur_text_section, not text_section.
2016-12-15 17:49:56 +01:00
Michael Matz
22f5fccc2c Fix 64bit enums and switch cases
See testcases.  We now support 64-bit case constants and, at the same
time, 64-bit enum constants on L64 platforms (otherwise the Sym struct
isn't large enough for now).  The testcase also checks for various
cases where sign/zero extension was confused.
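
A small example of the constructs this enables (values chosen
arbitrarily):

    /* 64-bit enum constant (accepted on L64 platforms) and 64-bit
     * case constants in a switch over a long long value */
    enum big { BIG_ONE = 0x100000000LL };

    static int classify(long long x)
    {
        switch (x) {
        case 0x100000000LL: return 1;   /* needs a full 64-bit compare */
        case -1LL:          return 2;   /* sign extension must be right */
        default:            return 0;
        }
    }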
2016-12-15 17:49:56 +01:00
Michael Matz
3e77bfb6e9 tccpp: Fix token pasting
See testcase.  We must always paste tokens (at least if not
currently substituting a normal argument, which is a speed optimization
only now) but at the same time must not regard a ## token
coming from argument expansion as the token-paste operator, nor
if we constructed a ## token due to pasting itself (that was already
checked by pp/01.c).
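
The constructed-## case is the classic hash_hash example from the
C standard (C99 6.10.3.3); the argument-expansion case is analogous:

    #define hash_hash # ## #           /* pasting yields the token ## */
    #define mkstr(a) # a
    #define in_between(a) mkstr(a)
    #define join(c, d) in_between(c hash_hash d)

    char p[] = join(x, y);             /* "x ## y", not "xy" */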
2016-12-15 17:49:56 +01:00
Michael Matz
3db037387c libtcc1: Don't use stdlib functions
libtcc1 is the compiler support library and therefore needs
to function in a freestanding environment.  In particular
it can't just use fprintf or stderr, which it did on x86-64
(but only when compiled by GCC).  The tight integration between
libtcc1 and tcc itself makes it impossible to ever reach that
case so the abort() there is enough.  abort() is strictly speaking
also not available in a freestanding environment, but it often is
nevertheless.
2016-12-15 17:49:56 +01:00
Michael Matz
d042e71e9f Fix miscompile with dead switches
In certain very specific situations (involving switches
with asms inside dead statement expressions) we could generate
invalid code (clobbering the buffer so much that we generated
invalid instructions).  Don't emit the decision table if the
switch itself is dead.
2016-12-15 17:49:55 +01:00
Michael Matz
7ae35bf1bb Handle multiple -O options
the last one wins, i.e. "-O2 -O0" does _not_ set __OPTIMIZE__.
2016-12-15 17:49:55 +01:00
Michael Matz
a158260e84 build: Respect CPPFLAGS override
so that e.g. make CPPFLAGS=-DSOMETHING works.
2016-12-15 17:49:55 +01:00
Michael Matz
235711f3d3 64bit: Fix addends > 32 bits
If a symbolic reference is offset by a constant wider than 32 bits,
the backends can't deal with it, so don't construct such
values.
2016-12-15 17:49:55 +01:00
Michael Matz
a2a596e767 x86-64-asm: Accept high register in clobbers
The callee-saved registers (among them r12-r15) really need
saving/restoring if mentioned in asm clobbers, even if TCC
itself doesn't use them.  E.g. the linux kernel relies on that
in its switch_to() implementation.
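
For example (x86-64 GCC-style inline asm; the asm body itself is just
illustrative):

    /* r12 is callee-saved: because it appears in the clobber list, the
     * compiler must preserve its value around this statement even if
     * it never allocates r12 itself */
    static void clobber_r12(void)
    {
        __asm__ __volatile__("xorq %%r12, %%r12" ::: "r12", "memory");
    }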
2016-12-15 17:49:55 +01:00
Michael Matz
ddd461dcc8 Fix initializing members multiple times
When initializing members where the initializer needs relocations
and the member is initialized multiple times we can't allow
that to lead to multiple relocations to the same place.  The last
one must win.
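
A minimal example of the situation, using designated initializers that
both need a relocation:

    int a, b;

    /* both initializers target s.p; only the last one, &b, may end up
     * in the emitted object */
    struct ptrbox { int *p; } s = { .p = &a, .p = &b };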
2016-12-15 17:49:53 +01:00
Michael Matz
f081acbfba Support local register variables
Similar to GCC, a local asm register variable enforces the use of a
specified register in asm operands (and doesn't otherwise
matter).  Works only if the variable is directly mentioned as
operand.  For that we now generally store a backpointer from
an SValue to a Sym when the SValue was the result of unary()
parsing a symbol identifier.
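
A typical use, sketched for x86-64 Linux (the syscall number and
clobber list are the usual ones, not taken from the commit):

    /* each local register variable forces its value into the named
     * register, and the asm references them directly as operands */
    static long my_write(int fd, const void *buf, unsigned long len)
    {
        register long          rax __asm__("rax") = 1;   /* __NR_write */
        register long          rdi __asm__("rdi") = fd;
        register const void   *rsi __asm__("rsi") = buf;
        register unsigned long rdx __asm__("rdx") = len;

        __asm__ __volatile__("syscall"
                             : "+r"(rax)
                             : "r"(rdi), "r"(rsi), "r"(rdx)
                             : "rcx", "r11", "memory");
        return rax;          /* byte count or negative errno */
    }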
2016-12-15 17:47:13 +01:00
Michael Matz
3bc9c325c5 Fix const folding of 64bit pointer constants
See testcase.
2016-12-15 17:47:12 +01:00
Michael Matz
0b0e64c2c9 x86-asm: Correct register size for pointer ops
A pointer is 64 bit as well, so it needs a full
register for register operands.
2016-12-15 17:47:12 +01:00
Michael Matz
7ab35c6265 struct-init: Copy relocs for compound literals
When copying the content of compound literals we must
include relocations as well.
2016-12-15 17:47:12 +01:00
Michael Matz
0bca6cab06 x86_64-asm: fix copy-out registers
If the destination is an indirect pointer access (which ends up
as VT_LLOCAL) the intermediate pointer must be loaded as VT_PTR,
not as whatever the pointed to type is.
2016-12-15 17:47:12 +01:00
Michael Matz
ad723a419f x86_64: Add -mno-sse option
This disables generation of any SSE instructions (in particular
in stdarg function prologues).  Necessary for kernel compiles.
2016-12-15 17:47:12 +01:00
Michael Matz
b5669a952b x86-64: relocation addend is 64bit
Some routines were using the wrong type (int) in passing addends,
truncating it.  This matters when bit 31 isn't set and the high
32 bits are set: the truncation would make it unsigned where in
reality it's signed (this happens e.g. on x86-64 with a load
address in the top 2GB).
2016-12-15 17:47:12 +01:00
Michael Matz
975c74c1f5 x86-64: Prefer 32S relocations
This target has _32 and _32S relocs (the latter being for signed
32 bit entities).  All instruction displacements have to use
the _32S variant.  Normal references like
  .long s
would use the _32 variant.  For normal executables this
doesn't matter.  For shared libraries neither (which use PC-relative
relocs).  But it matters for things like the kernel that are linked
to high addresses (signed ones).  There the GNU linker would error
out on overflow for the _32 variant.

To keep life simple we simply switch from _32 to _32S altogether.
Strictly speaking it's still wrong, but in practice using _32 is
more often wrong than using _32S ;)
2016-12-15 17:47:12 +01:00