Import the GCC 4.1 branch from today. It includes these bugs fixed since
our last 4.1 branch import, plus a few other changes:
	c/27718 26242 c++/27451 c/26818 tree-optimization/26622
	target/27758 middle-end/27743 middle-end/27620
	tree-optimization/27549 tree-optimization/27283
	target/26600 c++/26757 driver/26885 tree-optimization/27603
	rtl-optimization/14261 rtl-optimization/22563 middle-end/26729
	rtl-optimization/27335 target/27421 middle-end/27384
	middle-end/27488 target/27158 bootstrap/26872 target/26545
	tree-optimization/27136 tree-optimization/27409 middle-end/27260
	tree-optimization/27151 target/26481 target/26765
	target/26481 tree-optimization/27285 optimization/25985
	tree-optimization/27364 c/25309 target/27387 target/27374
	middle-end/26565 target/26826 tree-optimization/27236
	middle-end/26869 tree-optimization/27218 rtl-optimization/26685
	tree-optimization/26865 target/26961 target/21283 c/26774 c/25875
	mudflap/26789
mrg 2006-06-03 05:11:23 +00:00
parent a55e9cdf1a
commit 1083f0866d
30 changed files with 958 additions and 286 deletions


@ -1,3 +1,7 @@
2006-05-24 Release Manager
* GCC 4.1.1 released.
2006-04-04 Alex Deiter <tiamat@komi.mts.ru>
PR bootstrap/27023


@ -1,3 +1,7 @@
2006-05-24 Release Manager
* GCC 4.1.1 released.
2006-02-28 Release Manager
* GCC 4.1.0 released.




@ -1,3 +1,7 @@
2006-05-24 Release Manager
* GCC 4.1.1 released.
2006-03-31 H.J. Lu <hongjiu.lu@intel.com>
Backport from mainline



@ -1 +1 @@
4.1.1
4.1.2


@ -1,3 +1,462 @@
2006-06-01 Alan Modra <amodra@bigpond.net.au>
* config/rs6000/rs6000.c (rs6000_gimplify_va_arg): Consume all
fp regs if the last fp arg doesn't fit in regs.
2006-05-31 Jie Zhang <jie.zhang@analog.com>
* config/bfin/bfin.c (bfin_delegitimize_address): New.
(TARGET_DELEGITIMIZE_ADDRESS): Define.
2006-05-30 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
PR c/27718
* c-typeck.c (c_expr_sizeof_type): Handle invalid types.
2006-05-29 Diego Novillo <dnovillo@redhat.com>
PR 26242
* passes.texi: Add documentation for pass_vrp,
pass_fre, pass_store_ccp, pass_copy_prop,
pass_store_copy_prop, pass_merge_phi, pass_nrv,
pass_return_slot, pass_object_sizes, pass_lim,
pass_linear_transform, pass_empty_loop, pass_complete_unroll,
and pass_stdarg.
2006-05-29 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
PR c++/27451
* stmt.c (expand_asm_operands): Skip asm statement with erroneous
clobbers.
PR c/26818
* c-decl.c (finish_struct): Skip erroneous fields.
2006-05-28 Kazu Hirata <kazu@codesourcery.com>
PR tree-optimization/26622
* fold-const.c (fold_ternary) <COND_EXPR>: Call fold_convert
on arg1.
2006-05-26 Eric Botcazou <ebotcazou@adacore.com>
* doc/invoke.texi (Optimize Options): Document that -funit-at-a-time
is enabled at -O and above.
2006-05-26 Jakub Jelinek <jakub@redhat.com>
PR target/27758
Backported from mainline
2006-01-25 Andrew Pinski <pinskia@physics.uc.edu>
PR target/25758
* config/i386/i386.c (output_pic_addr_const) <case SYMBOL_REF>:
Use output_addr_const instead of assemble_name.
2006-05-26 Richard Guenther <rguenther@suse.de>
PR middle-end/27743
* fold-const.c (fold_binary): Do not look at the stripped
op0 for (a OP c1) OP c2 to a OP (c1+c2) shift optimization.
2006-05-24 Mark Mitchell <mark@codesourcery.com>
* DEV-PHASE: Set to prerelease.
* BASE-VER: Increment.
2006-05-24 Release Manager
* GCC 4.1.1 released.
2006-05-22 Gerald Pfeifer <gerald@pfeifer.com>
* doc/install.texi (Configuration): Remove reference to CrossGCC
FAQ which was hijacked.
(Building): Ditto.
2006-05-17 H.J. Lu <hongjiu.lu@intel.com>
* Makefile.in: Undo the last 2 changes.
* optc-gen.awk: Likewise.
* common.opt: Undo the last change.
* doc/options.texi: Likewise.
* gcc.c: Likewise.
* opts.c: Likewise.
* opts.h: Likewise.
* opts-common.c: Removed.
2006-05-17 Bernd Schmidt <bernd.schmidt@analog.com>
PR middle-end/27620
* expr.c (safe_from_p): Handle CONSTRUCTOR again.
2006-05-17 Zdenek Dvorak <dvorakz@suse.cz>
PR tree-optimization/27548
* tree-scalar-evolution.c (scev_const_prop): Do not prolong life
range of ssa names that appear on abnormal edges.
* tree-ssa-loop-ivopts.c (contains_abnormal_ssa_name_p): Export.
* tree-flow.h (contains_abnormal_ssa_name_p): Declare.
2006-05-17 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/27549
Backported from mainline
2006-05-01 Zdenek Dvorak <dvorakz@suse.cz>
PR tree-optimization/27283
* tree-ssa-loop-ivopts.c (struct nfe_cache_elt): Store just trees,
not whole # of iteration descriptions.
(niter_for_exit): Return just # of iterations. Fail if # of iterations
uses abnormal ssa name.
(niter_for_single_dom_exit): Ditto.
(find_induction_variables, may_eliminate_iv): Expect niter_for_exit to
return just the number of iterations.
(add_iv_outer_candidates, may_replace_final_value): Likewise.
2006-05-16 H.J. Lu <hongjiu.lu@intel.com>
* Makefile.in (GCC_OBJS): Replace options.o with gcc-options.o.
(gcc-options.o): New rule.
* optc-gen.awk: Protect variables for gcc-options.o with
#ifdef GCC_DRIVER/#endif.
2006-05-16 Roger Sayle <roger@eyesopen.com>
PR target/26600
* config/i386/i386.c (legitimate_constant_p) <CONST_DOUBLE>: TImode
integer constants other than zero are only legitimate on TARGET_64BIT.
<CONST_VECTOR> Only zero vectors are legitimate.
(ix86_cannot_force_const_mem): Integral and vector constants can
always be put in the constant pool.
2006-05-16 Andrew MacLeod <amacleod@redhat.com>
PR c++/26757
* tree-dfa.c (struct walk_state): Remove.
(add_referenced_var): Change parameters.
(find_referenced_vars): Don't use a walk_state.
(find_vars_r): Remove unused parameter and change parms to add_referenced_var.
(referenced_var_insert): Assert same UID has not been inserted.
(add_referenced_var): Check if var exists via referenced_var table.
(get_virtual_var): Call add_referenced_var with new parameter.
2006-05-16 H.J. Lu <hongjiu.lu@intel.com>
PR driver/26885
* Makefile.in (GCC_OBJS): New.
(OBJS-common): Add opts-common.o.
(xgcc$(exeext)): Replace gcc.o with $(GCC_OBJS).
(cpp$(exeext)): Likewise.
(gcc.o): Also depend on opts.h.
(opts-common.o): New.
* common.opt (gcoff): Add Negative(gdwarf-2).
(gdwarf-2): Add Negative(gstabs).
(gstabs): Add Negative(gstabs+).
(gstabs+): Add Negative(gvms).
(gvms): Add Negative(gxcoff).
(gxcoff): Add Negative(gxcoff+).
(gxcoff+): Add Negative(gcoff).
* config/i386/i386.opt (m32): Add Negative(m64).
(m64): Add Negative(m32).
* doc/options.texi: Document the Negative option.
* gcc.c: Include "opts.h".
(main): Call prune_options after expandargv.
* optc-gen.awk: Generate common declarations for all flag
variables in options.c. Output the neg_index field.
* opts.c (find_opt): Moved to ...
* opts-common.c: Here. New file.
* opts.h (cl_option): Add a neg_index field.
(find_opt): New.
(prune_options): Likewise.
2006-05-16 Richard Guenther <rguenther@suse.de>
PR tree-optimization/27603
* tree-ssa-loop-niter.c (infer_loop_bounds_from_undefined):
Do computation in original type, do division only for nonzero
steps.
2006-05-15 Andreas Krebbel <krebbel1@de.ibm.com>
* expmed.c (store_bit_field): Handle paradoxical subregs on big endian
machines.
2006-05-15 Andreas Krebbel <krebbel1@de.ibm.com>
PR rtl-optimization/14261
* ifcvt.c (noce_emit_move_insn): Call store_bit_field if the resulting
move would be an INSV insn.
(noce_process_if_block): Don't optimize if the destination is a
ZERO_EXTRACT which can't be handled by noce_emit_move_insn.
2006-05-15 Roger Sayle <roger@eyesopen.com>
PR rtl-optimization/22563
Backports from mainline
* expmed.c (store_fixed_bit_field): When using AND and IOR to store
a fixed width bitfield, always force the intermediates into pseudos.
Also check whether the bitsize is valid for the machine's "insv"
instruction before moving the target into a pseudo for use with
the insv.
* config/i386/predicates.md (const8_operand): New predicate.
* config/i386/i386.md (extv, extzv, insv): Use the new
const8_operand predicate where appropriate.
2006-05-13 Roger Sayle <roger@eyesopen.com>
PR middle-end/26729
* fold-const.c (fold_truthop): Check integer_nonzerop instead of
!integer_zerop to avoid problems with TREE_OVERFLOW.
2006-05-13 Zdenek Dvorak <dvorakz@suse.cz>
PR rtl-optimization/27335
* loop-unroll.c (peel_loops_completely): Use loops->parray to walk the
loops.
2006-05-12 Andreas Krebbel <krebbel1@de.ibm.com>
* config/s390/s390.c (s390_const_ok_for_constraint_p): Disallow -4G for
On constraint.
* config/s390/s390.md: Adjust comment describing On constraint.
2006-05-11 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
PR target/27421
* config/i386/i386.c (classify_argument): Skip fields with invalid
types.
PR middle-end/27384
* fold-const.c (size_binop): Move sanity check for arguments to
the beginning of the function.
PR middle-end/27488
* fold-const.c (tree_expr_nonnegative_p): Return early on invalid
expression.
2006-05-11 Roger Sayle <roger@eyesopen.com>
PR target/27158
* reload.c (find_reloads_toplev): Only return the simplified SUBREG
of a reg_equiv_constant if the result is a legitimate constant.
2006-05-09 Steve Ellcey <sje@cup.hp.com>
PR bootstrap/26872
* config.gcc (hppa[12]*-*-hpux10*): Set gas to yes.
(hppa*64*-*-hpux11*): Ditto.
(hppa[12]*-*-hpux11*): Ditto.
2006-05-09 David Edelsohn <edelsohn@gnu.org>
PR target/26545
* config/rs6000/aix41.h (TARGET_64BIT): Define.
2006-05-09 Richard Guenther <rguenther@suse.de>
PR tree-optimization/27136
* tree-ssa-loop-niter.c (get_val_for): Correct function
comment, assert requirements.
(loop_niter_by_eval): Stop processing if the iterated
value did not simplify.
2006-05-09 Richard Guenther <rguenther@suse.de>
PR tree-optimization/27409
* tree-ssa-structalias.c (get_constraint_for_component_ref):
Do not try to find zero-sized subvars.
2006-05-08 Alan Modra <amodra@bigpond.net.au>
PR middle-end/27260
* builtins.c (expand_builtin_memset): Expand val in original mode.
2006-05-06 Richard Guenther <rguenther@suse.de>
PR tree-optimization/27151
* tree-vect-transform.c (vectorizable_condition): Punt on
values that have a different type than the condition.
2006-05-04 David Edelsohn <edelsohn@gnu.org>
PR target/26481
* config/rs6000/rs6000.md (stmsi_power): Mark clobber constraint
with output modifier.
2006-05-04 Richard Sandiford <richard@codesourcery.com>
PR target/26765
* config/mips/mips.c (mips_symbolic_address_p): Return true
for SYMBOL_TLSGD, SYMBOL_TLSLDM, SYMBOL_DTPREL, SYMBOL_TPREL,
SYMBOL_GOTTPREL, and SYMBOL_TLS.
2006-05-04 David Edelsohn <edelsohn@gnu.org>
PR target/26481
* config/rs6000/rs6000.md (store_multiple_power): Delete.
(stmsi[345678]_power): New.
2006-05-04 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/27285
Backport from mainline:
2006-03-28 Zdenek Dvorak <dvorakz@suse.cz>
PR tree-optimization/25985
* tree-ssa-loop-niter.c (number_of_iterations_le,
number_of_iterations_ne): Make comments more precise.
(number_of_iterations_cond): Add only_exit argument. Use the
fact that signed variables do not overflow only when only_exit
is true.
(loop_only_exit_p): New.
(number_of_iterations_exit): Pass result of loop_only_exit_p to
number_of_iterations_cond.
2006-05-02 Jeff Law <law@redhat.com>
PR tree-optimization/27364
* tree-vrp.c (vrp_int_const_binop): Fix detection of overflow from
multiply expressions.
2006-05-02 Roger Sayle <roger@eyesopen.com>
PR c/25309
* c-typeck.c (struct spelling): Make I an unsigned HOST_WIDE_INT.
(push_array_bounds): Delete prototype. Change BOUNDS argument to
an unsigned HOST_WIDE_INT.
(print_spelling): Use HOST_WIDE_INT_PRINT_UNSIGNED to output the
array index.
(really_start_incremental_init): No need to call convert because
bitsize_zero_node is already of type bitsizetype.
(push_init_level): Extract the value of constructor_index as an
unsigned HOST_WIDE_INT quantity, using tree_low_cst.
(process_init_element): Likewise.
2006-05-02 Kazu Hirata <kazu@codesourcery.com>
PR target/27387
* arm.c (arm_output_mi_thunk): Use pc-relative addressing when
-mthumb -fPIC are used.
2006-05-01 Kazu Hirata <kazu@codesourcery.com>
PR target/27374
* config/arm/vfp.md (*arm_movdi_vfp): Correct the output
templates for case 3 and 4.
2006-05-01 Richard Guenther <rguenther@suse.de>
PR middle-end/26565
* builtins.c (get_pointer_alignment): Handle component
references for field alignment.
2006-04-28 Richard Guenther <rguenther@suse.de>
PR target/26826
* reload.c (push_reload): Guard calls to get_secondary_mem
for memory subregs.
2006-04-28 Andrew Pinski <pinskia@gcc.gnu.org>
Richard Guenther <rguenther@suse.de>
PR tree-optimization/27236
* tree-inline.c (copy_body_r): Make sure to copy
TREE_THIS_VOLATILE flag.
2006-04-28 Richard Guenther <rguenther@suse.de>
PR middle-end/26869
* tree-complex.c (update_parameter_components): Don't handle
unused parameters which have no default def.
2006-04-28 Andrew Pinski <pinskia@gcc.gnu.org>
Richard Guenther <rguenther@suse.de>
PR tree-optimization/27218
* tree-inline.c (expand_call_inline): Strip useless type
conversions for the return slot address.
2006-04-27 Richard Guenther <rguenther@suse.de>
PR rtl-optimization/26685
* params.def (PARAM_MAX_CSE_INSNS): Correct typo that named
this one "max-flow-memory-locations".
2006-04-25 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/26865
* tree-ssa-structalias.c (find_func_aliases): Check that anyoffsetrhs
type is pointer or array type.
2006-04-24 Roger Sayle <roger@eyesopen.com>
PR target/26961
* fold-const.c (fold_ternary): When converting "A ? B : C" into either
"A op B" or "A op C", we may need to convert A to the type of B and C.
2006-04-23 Roger Sayle <roger@eyesopen.com>
PR target/21283
* config/fr30/fr30.md (define_split): Avoid calling gen_lowpart on
a SImode SUBREG of a floating point register after no_new_pseudos.
2006-04-23 Roger Sayle <roger@eyesopen.com>
* config/fr30/fr30.md (addsi_small_int): Use REGNO_PTR_FRAME_P to
identify potentially eliminable registers to additionally catch
VIRTUAL_INCOMING_ARGS_REGNUM.
(addsi3): Update the conditions on when to use addsi_small_int.
2006-04-23 Eric Botcazou <ebotcazou@adacore.com>
* tree-tailcall.c (pass_tail_recursion): Use gate_tail_calls too.
2006-04-21 Carlos O'Donell <carlos@codesourcery.com>
Backport from mainline:
2006-04-19 Carlos O'Donell <carlos@codesourcery.com>
Nathan Sidwell <nathan@codesourcery.com>
PR c/26774
* stor-layout.c (update_alignment_for_field): Do not align
ERROR_MARK nodes.
(place_union_field): Place union field at the start of the union.
(place_field): Move ERROR_MARK check later, and use the current
allocation position to maintain monotonicity.
2006-04-21 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
PR c/25875
* c-typeck.c (digest_init): Robustify.
2006-04-21 Steve Ellcey <sje@cup.hp.com>
* config/pa/t-pa64: Add dependencies on $(GCC_PASSES).
2006-04-21 Paul Brook <paul@codesourcery.com>
Backport from mainline.
* config/arm/arm.c (arm_override_options): Error on iWMMXt and
hardware floating point.
2006-04-20 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
PR mudflap/26789
* tree-mudflap.c (mudflap_finish_file): Skip function when there were
errors.
2006-04-20 Kaz Kojima <kkojima@gcc.gnu.org>
PR target/27182


@ -1 +1 @@
20060420
20060603


@ -5345,6 +5345,9 @@ finish_struct (tree t, tree fieldlist, tree attributes)
saw_named_field = 0;
for (x = fieldlist; x; x = TREE_CHAIN (x))
{
if (TREE_TYPE (x) == error_mark_node)
continue;
DECL_CONTEXT (x) = t;
if (TYPE_PACKED (t) && TYPE_ALIGN (TREE_TYPE (x)) > BITS_PER_UNIT)


@ -88,7 +88,6 @@ static tree convert_for_assignment (tree, tree, enum impl_conv, tree, tree,
static tree valid_compound_expr_initializer (tree, tree);
static void push_string (const char *);
static void push_member_name (tree);
static void push_array_bounds (int);
static int spelling_length (void);
static char *print_spelling (char *);
static void warning_init (const char *);
@ -2103,7 +2102,8 @@ c_expr_sizeof_type (struct c_type_name *t)
type = groktypename (t);
ret.value = c_sizeof (type);
ret.original_code = ERROR_MARK;
pop_maybe_used (C_TYPE_VARIABLE_SIZE (type));
pop_maybe_used (type != error_mark_node
? C_TYPE_VARIABLE_SIZE (type) : false);
return ret;
}
@ -4229,7 +4229,7 @@ struct spelling
int kind;
union
{
int i;
unsigned HOST_WIDE_INT i;
const char *s;
} u;
};
@ -4289,7 +4289,7 @@ push_member_name (tree decl)
/* Push an array bounds on the stack. Printed as [BOUNDS]. */
static void
push_array_bounds (int bounds)
push_array_bounds (unsigned HOST_WIDE_INT bounds)
{
PUSH_SPELLING (SPELLING_BOUNDS, bounds, u.i);
}
@ -4324,7 +4324,7 @@ print_spelling (char *buffer)
for (p = spelling_base; p < spelling; p++)
if (p->kind == SPELLING_BOUNDS)
{
sprintf (d, "[%d]", p->u.i);
sprintf (d, "[" HOST_WIDE_INT_PRINT_UNSIGNED "]", p->u.i);
d += strlen (d);
}
else
@ -4415,6 +4415,7 @@ digest_init (tree type, tree init, bool strict_string, int require_constant)
tree inside_init = init;
if (type == error_mark_node
|| !init
|| init == error_mark_node
|| TREE_TYPE (init) == error_mark_node)
return error_mark_node;
@ -5002,7 +5003,7 @@ really_start_incremental_init (tree type)
/* Vectors are like simple fixed-size arrays. */
constructor_max_index =
build_int_cst (NULL_TREE, TYPE_VECTOR_SUBPARTS (constructor_type) - 1);
constructor_index = convert (bitsizetype, bitsize_zero_node);
constructor_index = bitsize_zero_node;
constructor_unfilled_index = constructor_index;
}
else
@ -5119,7 +5120,7 @@ push_init_level (int implicit)
else if (TREE_CODE (constructor_type) == ARRAY_TYPE)
{
constructor_type = TREE_TYPE (constructor_type);
push_array_bounds (tree_low_cst (constructor_index, 0));
push_array_bounds (tree_low_cst (constructor_index, 1));
constructor_depth++;
}
@ -6514,7 +6515,7 @@ process_init_element (struct c_expr value)
/* Now output the actual element. */
if (value.value)
{
push_array_bounds (tree_low_cst (constructor_index, 0));
push_array_bounds (tree_low_cst (constructor_index, 1));
output_init_element (value.value, strict_string,
elttype, constructor_index, 1);
RESTORE_SPELLING_DEPTH (constructor_depth);


@ -1,7 +1,8 @@
/* Medium-level subroutines: convert bit-field store and extract
and shifts, multiplies and divides to rtl instructions.
Copyright (C) 1987, 1988, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998,
1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
Free Software Foundation, Inc.
This file is part of GCC.
@ -352,7 +353,25 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
meaningful at a much higher level; when structures are copied
between memory and regs, the higher-numbered regs
always get higher addresses. */
bitnum += SUBREG_BYTE (op0) * BITS_PER_UNIT;
int inner_mode_size = GET_MODE_SIZE (GET_MODE (SUBREG_REG (op0)));
int outer_mode_size = GET_MODE_SIZE (GET_MODE (op0));
byte_offset = 0;
/* Paradoxical subregs need special handling on big endian machines. */
if (SUBREG_BYTE (op0) == 0 && inner_mode_size < outer_mode_size)
{
int difference = inner_mode_size - outer_mode_size;
if (WORDS_BIG_ENDIAN)
byte_offset += (difference / UNITS_PER_WORD) * UNITS_PER_WORD;
if (BYTES_BIG_ENDIAN)
byte_offset += difference % UNITS_PER_WORD;
}
else
byte_offset = SUBREG_BYTE (op0);
bitnum += byte_offset * BITS_PER_UNIT;
op0 = SUBREG_REG (op0);
}
@ -608,7 +627,9 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_INT bitsize,
&& bitsize > 0
&& GET_MODE_BITSIZE (op_mode) >= bitsize
&& ! ((REG_P (op0) || GET_CODE (op0) == SUBREG)
&& (bitsize + bitpos > GET_MODE_BITSIZE (op_mode))))
&& (bitsize + bitpos > GET_MODE_BITSIZE (op_mode)))
&& insn_data[CODE_FOR_insv].operand[1].predicate (GEN_INT (bitsize),
VOIDmode))
{
int xbitpos = bitpos;
rtx value1;
@ -763,7 +784,7 @@ store_fixed_bit_field (rtx op0, unsigned HOST_WIDE_INT offset,
{
enum machine_mode mode;
unsigned int total_bits = BITS_PER_WORD;
rtx subtarget, temp;
rtx temp;
int all_zero = 0;
int all_one = 0;
@ -889,24 +910,28 @@ store_fixed_bit_field (rtx op0, unsigned HOST_WIDE_INT offset,
/* Now clear the chosen bits in OP0,
except that if VALUE is -1 we need not bother. */
/* We keep the intermediates in registers to allow CSE to combine
consecutive bitfield assignments. */
subtarget = op0;
temp = force_reg (mode, op0);
if (! all_one)
{
temp = expand_binop (mode, and_optab, op0,
temp = expand_binop (mode, and_optab, temp,
mask_rtx (mode, bitpos, bitsize, 1),
subtarget, 1, OPTAB_LIB_WIDEN);
subtarget = temp;
NULL_RTX, 1, OPTAB_LIB_WIDEN);
temp = force_reg (mode, temp);
}
else
temp = op0;
/* Now logical-or VALUE into OP0, unless it is zero. */
if (! all_zero)
temp = expand_binop (mode, ior_optab, temp, value,
subtarget, 1, OPTAB_LIB_WIDEN);
{
temp = expand_binop (mode, ior_optab, temp, value,
NULL_RTX, 1, OPTAB_LIB_WIDEN);
temp = force_reg (mode, temp);
}
if (op0 != temp)
emit_move_insn (op0, temp);
}


@ -5974,6 +5974,19 @@ safe_from_p (rtx x, tree exp, int top_p)
return safe_from_p (x, exp, 0);
}
}
else if (TREE_CODE (exp) == CONSTRUCTOR)
{
constructor_elt *ce;
unsigned HOST_WIDE_INT idx;
for (idx = 0;
VEC_iterate (constructor_elt, CONSTRUCTOR_ELTS (exp), idx, ce);
idx++)
if ((ce->index != NULL_TREE && !safe_from_p (x, ce->index, 0))
|| !safe_from_p (x, ce->value, 0))
return 0;
return 1;
}
else if (TREE_CODE (exp) == ERROR_MARK)
return 1; /* An already-visited SAVE_EXPR? */
else


@ -1668,6 +1668,9 @@ size_binop (enum tree_code code, tree arg0, tree arg1)
{
tree type = TREE_TYPE (arg0);
if (arg0 == error_mark_node || arg1 == error_mark_node)
return error_mark_node;
gcc_assert (TREE_CODE (type) == INTEGER_TYPE && TYPE_IS_SIZETYPE (type)
&& type == TREE_TYPE (arg1));
@ -1687,9 +1690,6 @@ size_binop (enum tree_code code, tree arg0, tree arg1)
return int_const_binop (code, arg0, arg1, 0);
}
if (arg0 == error_mark_node || arg1 == error_mark_node)
return error_mark_node;
return fold_build2 (code, type, arg0, arg1);
}
@ -4935,10 +4935,10 @@ fold_truthop (enum tree_code code, tree truth_type, tree lhs, tree rhs)
l_const = fold_convert (lntype, l_const);
l_const = unextend (l_const, ll_bitsize, ll_unsignedp, ll_and_mask);
l_const = const_binop (LSHIFT_EXPR, l_const, size_int (xll_bitpos), 0);
if (! integer_zerop (const_binop (BIT_AND_EXPR, l_const,
fold_build1 (BIT_NOT_EXPR,
lntype, ll_mask),
0)))
if (integer_nonzerop (const_binop (BIT_AND_EXPR, l_const,
fold_build1 (BIT_NOT_EXPR,
lntype, ll_mask),
0)))
{
warning (0, "comparison is always %d", wanted_code == NE_EXPR);
@ -4950,10 +4950,10 @@ fold_truthop (enum tree_code code, tree truth_type, tree lhs, tree rhs)
r_const = fold_convert (lntype, r_const);
r_const = unextend (r_const, rl_bitsize, rl_unsignedp, rl_and_mask);
r_const = const_binop (LSHIFT_EXPR, r_const, size_int (xrl_bitpos), 0);
if (! integer_zerop (const_binop (BIT_AND_EXPR, r_const,
fold_build1 (BIT_NOT_EXPR,
lntype, rl_mask),
0)))
if (integer_nonzerop (const_binop (BIT_AND_EXPR, r_const,
fold_build1 (BIT_NOT_EXPR,
lntype, rl_mask),
0)))
{
warning (0, "comparison is always %d", wanted_code == NE_EXPR);
@ -8548,7 +8548,7 @@ fold_binary (enum tree_code code, tree type, tree op0, tree op1)
return NULL_TREE;
/* Turn (a OP c1) OP c2 into a OP (c1+c2). */
if (TREE_CODE (arg0) == code && host_integerp (arg1, false)
if (TREE_CODE (op0) == code && host_integerp (arg1, false)
&& TREE_INT_CST_LOW (arg1) < TYPE_PRECISION (type)
&& host_integerp (TREE_OPERAND (arg0, 1), false)
&& TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1)) < TYPE_PRECISION (type))
@ -10073,8 +10073,10 @@ fold_ternary (enum tree_code code, tree type, tree op0, tree op1, tree op2)
&& integer_zerop (TREE_OPERAND (arg0, 1))
&& integer_zerop (op2)
&& (tem = sign_bit_p (TREE_OPERAND (arg0, 0), arg1)))
return fold_convert (type, fold_build2 (BIT_AND_EXPR,
TREE_TYPE (tem), tem, arg1));
return fold_convert (type,
fold_build2 (BIT_AND_EXPR,
TREE_TYPE (tem), tem,
fold_convert (TREE_TYPE (tem), arg1)));
/* (A >> N) & 1 ? (1 << N) : 0 is simply A & (1 << N). A & 1 was
already handled above. */
@ -10111,7 +10113,9 @@ fold_ternary (enum tree_code code, tree type, tree op0, tree op1, tree op2)
if (integer_zerop (op2)
&& truth_value_p (TREE_CODE (arg0))
&& truth_value_p (TREE_CODE (arg1)))
return fold_build2 (TRUTH_ANDIF_EXPR, type, arg0, arg1);
return fold_build2 (TRUTH_ANDIF_EXPR, type,
fold_convert (type, arg0),
arg1);
/* Convert A ? B : 1 into !A || B if A and B are truth values. */
if (integer_onep (op2)
@ -10121,7 +10125,9 @@ fold_ternary (enum tree_code code, tree type, tree op0, tree op1, tree op2)
/* Only perform transformation if ARG0 is easily inverted. */
tem = invert_truthvalue (arg0);
if (TREE_CODE (tem) != TRUTH_NOT_EXPR)
return fold_build2 (TRUTH_ORIF_EXPR, type, tem, arg1);
return fold_build2 (TRUTH_ORIF_EXPR, type,
fold_convert (type, tem),
arg1);
}
/* Convert A ? 0 : B into !A && B if A and B are truth values. */
@ -10132,14 +10138,18 @@ fold_ternary (enum tree_code code, tree type, tree op0, tree op1, tree op2)
/* Only perform transformation if ARG0 is easily inverted. */
tem = invert_truthvalue (arg0);
if (TREE_CODE (tem) != TRUTH_NOT_EXPR)
return fold_build2 (TRUTH_ANDIF_EXPR, type, tem, op2);
return fold_build2 (TRUTH_ANDIF_EXPR, type,
fold_convert (type, tem),
op2);
}
/* Convert A ? 1 : B into A || B if A and B are truth values. */
if (integer_onep (arg1)
&& truth_value_p (TREE_CODE (arg0))
&& truth_value_p (TREE_CODE (op2)))
return fold_build2 (TRUTH_ORIF_EXPR, type, arg0, op2);
return fold_build2 (TRUTH_ORIF_EXPR, type,
fold_convert (type, arg0),
op2);
return NULL_TREE;
@ -10786,6 +10796,9 @@ multiple_of_p (tree type, tree top, tree bottom)
int
tree_expr_nonnegative_p (tree t)
{
if (t == error_mark_node)
return 0;
if (TYPE_UNSIGNED (TREE_TYPE (t)))
return 1;


@ -702,47 +702,76 @@ noce_emit_move_insn (rtx x, rtx y)
end_sequence();
if (recog_memoized (insn) <= 0)
switch (GET_RTX_CLASS (GET_CODE (y)))
{
case RTX_UNARY:
ot = code_to_optab[GET_CODE (y)];
if (ot)
{
start_sequence ();
target = expand_unop (GET_MODE (y), ot, XEXP (y, 0), x, 0);
if (target != NULL_RTX)
{
if (target != x)
emit_move_insn (x, target);
seq = get_insns ();
}
end_sequence ();
}
break;
{
if (GET_CODE (x) == ZERO_EXTRACT)
{
rtx op = XEXP (x, 0);
unsigned HOST_WIDE_INT size = INTVAL (XEXP (x, 1));
unsigned HOST_WIDE_INT start = INTVAL (XEXP (x, 2));
case RTX_BIN_ARITH:
case RTX_COMM_ARITH:
ot = code_to_optab[GET_CODE (y)];
if (ot)
{
start_sequence ();
target = expand_binop (GET_MODE (y), ot,
XEXP (y, 0), XEXP (y, 1),
x, 0, OPTAB_DIRECT);
if (target != NULL_RTX)
{
if (target != x)
emit_move_insn (x, target);
seq = get_insns ();
}
end_sequence ();
}
break;
/* store_bit_field expects START to be relative to
BYTES_BIG_ENDIAN and adjusts this value for machines with
BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN. In order to be able to
invoke store_bit_field again it is necessary to have the START
value from the first call. */
if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
{
if (MEM_P (op))
start = BITS_PER_UNIT - start - size;
else
{
gcc_assert (REG_P (op));
start = BITS_PER_WORD - start - size;
}
}
default:
break;
}
gcc_assert (start < (MEM_P (op) ? BITS_PER_UNIT : BITS_PER_WORD));
store_bit_field (op, size, start, GET_MODE (x), y);
return;
}
switch (GET_RTX_CLASS (GET_CODE (y)))
{
case RTX_UNARY:
ot = code_to_optab[GET_CODE (y)];
if (ot)
{
start_sequence ();
target = expand_unop (GET_MODE (y), ot, XEXP (y, 0), x, 0);
if (target != NULL_RTX)
{
if (target != x)
emit_move_insn (x, target);
seq = get_insns ();
}
end_sequence ();
}
break;
case RTX_BIN_ARITH:
case RTX_COMM_ARITH:
ot = code_to_optab[GET_CODE (y)];
if (ot)
{
start_sequence ();
target = expand_binop (GET_MODE (y), ot,
XEXP (y, 0), XEXP (y, 1),
x, 0, OPTAB_DIRECT);
if (target != NULL_RTX)
{
if (target != x)
emit_move_insn (x, target);
seq = get_insns ();
}
end_sequence ();
}
break;
default:
break;
}
}
emit_insn (seq);
return;
}
@ -2207,6 +2236,12 @@ noce_process_if_block (struct ce_if_block * ce_info)
{
if (no_new_pseudos || GET_MODE (x) == BLKmode)
return FALSE;
if (GET_CODE (x) == ZERO_EXTRACT
&& (GET_CODE (XEXP (x, 1)) != CONST_INT
|| GET_CODE (XEXP (x, 2)) != CONST_INT))
return FALSE;
x = gen_reg_rtx (GET_MODE (GET_CODE (x) == STRICT_LOW_PART
? XEXP (x, 0) : x));
}


@ -237,22 +237,15 @@ loop_exit_at_end_p (struct loop *loop)
static void
peel_loops_completely (struct loops *loops, int flags)
{
struct loop *loop, *next;
struct loop *loop;
unsigned i;
loop = loops->tree_root;
while (loop->inner)
loop = loop->inner;
while (loop != loops->tree_root)
/* Scan the loops, the inner ones first. */
for (i = loops->num - 1; i > 0; i--)
{
if (loop->next)
{
next = loop->next;
while (next->inner)
next = next->inner;
}
else
next = loop->outer;
loop = loops->parray[i];
if (!loop)
continue;
loop->lpt_decision.decision = LPT_NONE;
@ -275,7 +268,6 @@ peel_loops_completely (struct loops *loops, int flags)
verify_loop_structure (loops);
#endif
}
loop = next;
}
}


@ -1329,7 +1329,9 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
#ifdef SECONDARY_MEMORY_NEEDED
/* If a memory location is needed for the copy, make one. */
if (in != 0 && (REG_P (in) || GET_CODE (in) == SUBREG)
if (in != 0
&& (REG_P (in)
|| (GET_CODE (in) == SUBREG && REG_P (SUBREG_REG (in))))
&& reg_or_subregno (in) < FIRST_PSEUDO_REGISTER
&& SECONDARY_MEMORY_NEEDED (REGNO_REG_CLASS (reg_or_subregno (in)),
class, inmode))
@ -1359,7 +1361,9 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
n_reloads++;
#ifdef SECONDARY_MEMORY_NEEDED
if (out != 0 && (REG_P (out) || GET_CODE (out) == SUBREG)
if (out != 0
&& (REG_P (out)
|| (GET_CODE (out) == SUBREG && REG_P (SUBREG_REG (out))))
&& reg_or_subregno (out) < FIRST_PSEUDO_REGISTER
&& SECONDARY_MEMORY_NEEDED (class,
REGNO_REG_CLASS (reg_or_subregno (out)),
@ -4586,20 +4590,24 @@ find_reloads_toplev (rtx x, int opnum, enum reload_type type,
rtx tem;
if (subreg_lowpart_p (x)
&& regno >= FIRST_PSEUDO_REGISTER && reg_renumber[regno] < 0
&& regno >= FIRST_PSEUDO_REGISTER
&& reg_renumber[regno] < 0
&& reg_equiv_constant[regno] != 0
&& (tem = gen_lowpart_common (GET_MODE (x),
reg_equiv_constant[regno])) != 0)
reg_equiv_constant[regno])) != 0
&& LEGITIMATE_CONSTANT_P (tem))
return tem;
if (regno >= FIRST_PSEUDO_REGISTER && reg_renumber[regno] < 0
if (regno >= FIRST_PSEUDO_REGISTER
&& reg_renumber[regno] < 0
&& reg_equiv_constant[regno] != 0)
{
tem =
simplify_gen_subreg (GET_MODE (x), reg_equiv_constant[regno],
GET_MODE (SUBREG_REG (x)), SUBREG_BYTE (x));
gcc_assert (tem);
return tem;
if (LEGITIMATE_CONSTANT_P (tem))
return tem;
}
/* If the subreg contains a reg that will be converted to a mem,
@ -4615,7 +4623,7 @@ find_reloads_toplev (rtx x, int opnum, enum reload_type type,
a wider mode if we have a paradoxical SUBREG. find_reloads will
force a reload in that case. So we should not do anything here. */
else if (regno >= FIRST_PSEUDO_REGISTER
if (regno >= FIRST_PSEUDO_REGISTER
#ifdef LOAD_EXTEND_OP
&& (GET_MODE_SIZE (GET_MODE (x))
<= GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))


@ -658,6 +658,10 @@ update_alignment_for_field (record_layout_info rli, tree field,
bool user_align;
bool is_bitfield;
/* Do not attempt to align an ERROR_MARK node */
if (TREE_CODE (type) == ERROR_MARK)
return 0;
/* Lay out the field so we know what alignment it needs. */
layout_decl (field, known_align);
desired_align = DECL_ALIGN (field);
@ -770,6 +774,12 @@ place_union_field (record_layout_info rli, tree field)
DECL_FIELD_BIT_OFFSET (field) = bitsize_zero_node;
SET_DECL_OFFSET_ALIGN (field, BIGGEST_ALIGNMENT);
/* If this is an ERROR_MARK return *after* having set the
field at the start of the union. This helps when parsing
invalid fields. */
if (TREE_CODE (TREE_TYPE (field)) == ERROR_MARK)
return;
/* We assume the union's size will be a multiple of a byte so we don't
bother with BITPOS. */
if (TREE_CODE (rli->t) == UNION_TYPE)
@@ -818,17 +828,6 @@ place_field (record_layout_info rli, tree field)
gcc_assert (TREE_CODE (field) != ERROR_MARK);
if (TREE_CODE (type) == ERROR_MARK)
{
if (TREE_CODE (field) == FIELD_DECL)
{
DECL_FIELD_OFFSET (field) = size_int (0);
DECL_FIELD_BIT_OFFSET (field) = bitsize_int (0);
}
return;
}
/* If FIELD is static, then treat it like a separate variable, not
really like a structure field. If it is a FUNCTION_DECL, it's a
method. In both cases, all we do is lay out the decl, and we do
@@ -853,6 +852,16 @@ place_field (record_layout_info rli, tree field)
return;
}
else if (TREE_CODE (type) == ERROR_MARK)
{
/* Place this field at the current allocation position, so we
maintain monotonicity. */
DECL_FIELD_OFFSET (field) = rli->offset;
DECL_FIELD_BIT_OFFSET (field) = rli->bitpos;
SET_DECL_OFFSET_ALIGN (field, rli->offset_align);
return;
}
/* Work out the known alignment so far. Note that A & (-A) is the
value of the least-significant bit in A that is one. */
if (! integer_zerop (rli->bitpos))


@@ -652,6 +652,8 @@ update_parameter_components (void)
type = TREE_TYPE (type);
ssa_name = default_def (parm);
if (!ssa_name)
continue;
r = build1 (REALPART_EXPR, type, ssa_name);
i = build1 (IMAGPART_EXPR, type, ssa_name);


@@ -65,19 +65,11 @@ struct dfa_stats_d
};
/* State information for find_vars_r. */
struct walk_state
{
/* Hash table used to avoid adding the same variable more than once. */
htab_t vars_found;
};
/* Local functions. */
static void collect_dfa_stats (struct dfa_stats_d *);
static tree collect_dfa_stats_r (tree *, int *, void *);
static tree find_vars_r (tree *, int *, void *);
static void add_referenced_var (tree, struct walk_state *);
static void add_referenced_var (tree, bool);
/* Global declarations. */
@@ -100,23 +92,16 @@ htab_t referenced_vars;
static void
find_referenced_vars (void)
{
htab_t vars_found;
basic_block bb;
block_stmt_iterator si;
struct walk_state walk_state;
vars_found = htab_create (50, htab_hash_pointer, htab_eq_pointer, NULL);
memset (&walk_state, 0, sizeof (walk_state));
walk_state.vars_found = vars_found;
FOR_EACH_BB (bb)
for (si = bsi_start (bb); !bsi_end_p (si); bsi_next (&si))
{
tree *stmt_p = bsi_stmt_ptr (si);
walk_tree (stmt_p, find_vars_r, &walk_state, NULL);
walk_tree (stmt_p, find_vars_r, NULL, NULL);
}
htab_delete (vars_found);
}
struct tree_opt_pass pass_referenced_vars =
@@ -551,14 +536,12 @@ collect_dfa_stats_r (tree *tp, int *walk_subtrees ATTRIBUTE_UNUSED,
the function. */
static tree
find_vars_r (tree *tp, int *walk_subtrees, void *data)
find_vars_r (tree *tp, int *walk_subtrees, void *data ATTRIBUTE_UNUSED)
{
struct walk_state *walk_state = (struct walk_state *) data;
/* If T is a regular variable that the optimizers are interested
in, add it to the list of variables. */
if (SSA_VAR_P (*tp))
add_referenced_var (*tp, walk_state);
add_referenced_var (*tp, false);
/* Type, _DECL and constant nodes have no interesting children.
Ignore them. */
@@ -610,6 +593,9 @@ referenced_var_insert (unsigned int uid, tree to)
h->uid = uid;
h->to = to;
loc = htab_find_slot_with_hash (referenced_vars, h, uid, INSERT);
/* This assert can only trigger if a variable with the same UID has been
inserted already. */
gcc_assert ((*(struct int_tree_map **)loc) == NULL);
*(struct int_tree_map **) loc = h;
}
@@ -621,25 +607,18 @@ referenced_var_insert (unsigned int uid, tree to)
duplicate checking is done. */
static void
add_referenced_var (tree var, struct walk_state *walk_state)
add_referenced_var (tree var, bool always)
{
void **slot;
var_ann_t v_ann;
v_ann = get_var_ann (var);
if (walk_state)
slot = htab_find_slot (walk_state->vars_found, (void *) var, INSERT);
else
slot = NULL;
if (slot == NULL || *slot == NULL)
gcc_assert (DECL_P (var));
if (always || referenced_var_lookup_if_exists (DECL_UID (var)) == NULL_TREE)
{
/* This is the first time we find this variable, add it to the
REFERENCED_VARS array and annotate it with attributes that are
intrinsic to the variable. */
if (slot)
*slot = (void *) var;
referenced_var_insert (DECL_UID (var), var);
@@ -658,7 +637,7 @@ add_referenced_var (tree var, struct walk_state *walk_state)
variables because it cannot be propagated by the
optimizers. */
&& (TREE_CONSTANT (var) || TREE_READONLY (var)))
walk_tree (&DECL_INITIAL (var), find_vars_r, walk_state, 0);
walk_tree (&DECL_INITIAL (var), find_vars_r, NULL, 0);
}
}
@@ -695,7 +674,7 @@ get_virtual_var (tree var)
void
add_referenced_tmp_var (tree var)
{
add_referenced_var (var, NULL);
add_referenced_var (var, true);
}


@@ -1226,6 +1226,10 @@ mudflap_finish_file (void)
{
tree ctor_statements = NULL_TREE;
/* No need to continue when there were errors. */
if (errorcount != 0 || sorrycount != 0)
return;
/* Insert a call to __mf_init. */
{
tree call2_stmt = build_function_call_expr (mf_init_fndecl, NULL_TREE);


@@ -2783,7 +2783,11 @@ scev_const_prop (void)
def = analyze_scalar_evolution_in_loop (ex_loop, loop, def, NULL);
def = compute_overall_effect_of_inner_loop (ex_loop, def);
if (!tree_does_not_contain_chrecs (def)
|| chrec_contains_symbols_defined_in_loop (def, ex_loop->num))
|| chrec_contains_symbols_defined_in_loop (def, ex_loop->num)
/* Moving the computation from the loop may prolong life range
of some ssa names, which may cause problems if they appear
on abnormal edges. */
|| contains_abnormal_ssa_name_p (def))
continue;
/* Eliminate the phi node and replace it by a computation outside


@@ -659,6 +659,86 @@ stmt_after_increment (struct loop *loop, struct iv_cand *cand, tree stmt)
}
}
/* Returns true if EXP is a ssa name that occurs in an abnormal phi node. */
static bool
abnormal_ssa_name_p (tree exp)
{
if (!exp)
return false;
if (TREE_CODE (exp) != SSA_NAME)
return false;
return SSA_NAME_OCCURS_IN_ABNORMAL_PHI (exp) != 0;
}
/* Returns false if BASE or INDEX contains a ssa name that occurs in an
abnormal phi node. Callback for for_each_index. */
static bool
idx_contains_abnormal_ssa_name_p (tree base, tree *index,
void *data ATTRIBUTE_UNUSED)
{
if (TREE_CODE (base) == ARRAY_REF)
{
if (abnormal_ssa_name_p (TREE_OPERAND (base, 2)))
return false;
if (abnormal_ssa_name_p (TREE_OPERAND (base, 3)))
return false;
}
return !abnormal_ssa_name_p (*index);
}
/* Returns true if EXPR contains a ssa name that occurs in an
abnormal phi node. */
bool
contains_abnormal_ssa_name_p (tree expr)
{
enum tree_code code;
enum tree_code_class class;
if (!expr)
return false;
code = TREE_CODE (expr);
class = TREE_CODE_CLASS (code);
if (code == SSA_NAME)
return SSA_NAME_OCCURS_IN_ABNORMAL_PHI (expr) != 0;
if (code == INTEGER_CST
|| is_gimple_min_invariant (expr))
return false;
if (code == ADDR_EXPR)
return !for_each_index (&TREE_OPERAND (expr, 0),
idx_contains_abnormal_ssa_name_p,
NULL);
switch (class)
{
case tcc_binary:
case tcc_comparison:
if (contains_abnormal_ssa_name_p (TREE_OPERAND (expr, 1)))
return true;
/* Fallthru. */
case tcc_unary:
if (contains_abnormal_ssa_name_p (TREE_OPERAND (expr, 0)))
return true;
break;
default:
gcc_unreachable ();
}
return false;
}
/* Element of the table in that we cache the numbers of iterations obtained
from exits of the loop. */
@@ -667,11 +747,9 @@ struct nfe_cache_elt
/* The edge for that the number of iterations is cached. */
edge exit;
/* True if the # of iterations was successfully determined. */
bool valid_p;
/* Description of # of iterations. */
struct tree_niter_desc niter;
/* Number of iterations corresponding to this exit, or NULL if it cannot be
determined. */
tree niter;
};
/* Hash function for nfe_cache_elt E. */
@@ -694,13 +772,14 @@ nfe_eq (const void *e1, const void *e2)
return elt1->exit == e2;
}
/* Returns structure describing number of iterations determined from
/* Returns tree describing number of iterations determined from
EXIT of DATA->current_loop, or NULL if something goes wrong. */
static struct tree_niter_desc *
static tree
niter_for_exit (struct ivopts_data *data, edge exit)
{
struct nfe_cache_elt *nfe_desc;
struct tree_niter_desc desc;
PTR *slot;
slot = htab_find_slot_with_hash (data->niters, exit,
@@ -711,25 +790,31 @@ niter_for_exit (struct ivopts_data *data, edge exit)
{
nfe_desc = xmalloc (sizeof (struct nfe_cache_elt));
nfe_desc->exit = exit;
nfe_desc->valid_p = number_of_iterations_exit (data->current_loop,
exit, &nfe_desc->niter,
true);
*slot = nfe_desc;
/* Try to determine number of iterations. We must know it
unconditionally (i.e., without possibility of # of iterations
being zero). Also, we cannot safely work with ssa names that
appear in phi nodes on abnormal edges, so that we do not create
overlapping life ranges for them (PR 27283). */
if (number_of_iterations_exit (data->current_loop,
exit, &desc, true)
&& zero_p (desc.may_be_zero)
&& !contains_abnormal_ssa_name_p (desc.niter))
nfe_desc->niter = desc.niter;
else
nfe_desc->niter = NULL_TREE;
}
else
nfe_desc = *slot;
if (!nfe_desc->valid_p)
return NULL;
return &nfe_desc->niter;
return nfe_desc->niter;
}
/* Returns structure describing number of iterations determined from
/* Returns tree describing number of iterations determined from
single dominating exit of DATA->current_loop, or NULL if something
goes wrong. */
static struct tree_niter_desc *
static tree
niter_for_single_dom_exit (struct ivopts_data *data)
{
edge exit = single_dom_exit (data->current_loop);
@@ -893,86 +978,6 @@ determine_biv_step (tree phi)
return (zero_p (iv.step) ? NULL_TREE : iv.step);
}
/* Returns true if EXP is a ssa name that occurs in an abnormal phi node. */
static bool
abnormal_ssa_name_p (tree exp)
{
if (!exp)
return false;
if (TREE_CODE (exp) != SSA_NAME)
return false;
return SSA_NAME_OCCURS_IN_ABNORMAL_PHI (exp) != 0;
}
/* Returns false if BASE or INDEX contains a ssa name that occurs in an
abnormal phi node. Callback for for_each_index. */
static bool
idx_contains_abnormal_ssa_name_p (tree base, tree *index,
void *data ATTRIBUTE_UNUSED)
{
if (TREE_CODE (base) == ARRAY_REF)
{
if (abnormal_ssa_name_p (TREE_OPERAND (base, 2)))
return false;
if (abnormal_ssa_name_p (TREE_OPERAND (base, 3)))
return false;
}
return !abnormal_ssa_name_p (*index);
}
/* Returns true if EXPR contains a ssa name that occurs in an
abnormal phi node. */
static bool
contains_abnormal_ssa_name_p (tree expr)
{
enum tree_code code;
enum tree_code_class class;
if (!expr)
return false;
code = TREE_CODE (expr);
class = TREE_CODE_CLASS (code);
if (code == SSA_NAME)
return SSA_NAME_OCCURS_IN_ABNORMAL_PHI (expr) != 0;
if (code == INTEGER_CST
|| is_gimple_min_invariant (expr))
return false;
if (code == ADDR_EXPR)
return !for_each_index (&TREE_OPERAND (expr, 0),
idx_contains_abnormal_ssa_name_p,
NULL);
switch (class)
{
case tcc_binary:
case tcc_comparison:
if (contains_abnormal_ssa_name_p (TREE_OPERAND (expr, 1)))
return true;
/* Fallthru. */
case tcc_unary:
if (contains_abnormal_ssa_name_p (TREE_OPERAND (expr, 0)))
return true;
break;
default:
gcc_unreachable ();
}
return false;
}
/* Finds basic ivs. */
static bool
@@ -1126,20 +1131,13 @@ find_induction_variables (struct ivopts_data *data)
if (dump_file && (dump_flags & TDF_DETAILS))
{
struct tree_niter_desc *niter;
niter = niter_for_single_dom_exit (data);
tree niter = niter_for_single_dom_exit (data);
if (niter)
{
fprintf (dump_file, " number of iterations ");
print_generic_expr (dump_file, niter->niter, TDF_SLIM);
fprintf (dump_file, "\n");
fprintf (dump_file, " may be zero if ");
print_generic_expr (dump_file, niter->may_be_zero, TDF_SLIM);
fprintf (dump_file, "\n");
fprintf (dump_file, "\n");
print_generic_expr (dump_file, niter, TDF_SLIM);
fprintf (dump_file, "\n\n");
};
fprintf (dump_file, "Induction variables:\n\n");
@@ -2216,12 +2214,8 @@ add_iv_value_candidates (struct ivopts_data *data,
static void
add_iv_outer_candidates (struct ivopts_data *data, struct iv_use *use)
{
struct tree_niter_desc *niter;
/* We must know where we exit the loop and how many times does it roll. */
niter = niter_for_single_dom_exit (data);
if (!niter
|| !zero_p (niter->may_be_zero))
if (! niter_for_single_dom_exit (data))
return;
add_candidate_1 (data, NULL, NULL, false, IP_NORMAL, use, NULL_TREE);
@@ -4012,7 +4006,6 @@ may_eliminate_iv (struct ivopts_data *data,
{
basic_block ex_bb;
edge exit;
struct tree_niter_desc *niter;
tree nit, nit_type;
tree wider_type, period, per_type;
struct loop *loop = data->current_loop;
@@ -4035,12 +4028,10 @@ may_eliminate_iv (struct ivopts_data *data,
if (flow_bb_inside_loop_p (loop, exit->dest))
return false;
niter = niter_for_exit (data, exit);
if (!niter
|| !zero_p (niter->may_be_zero))
nit = niter_for_exit (data, exit);
if (!nit)
return false;
nit = niter->niter;
nit_type = TREE_TYPE (nit);
/* Determine whether we may use the variable to test whether niter iterations
@@ -4123,7 +4114,7 @@ may_replace_final_value (struct ivopts_data *data, struct iv_use *use,
{
struct loop *loop = data->current_loop;
edge exit;
struct tree_niter_desc *niter;
tree nit;
exit = single_dom_exit (loop);
if (!exit)
@@ -4132,12 +4123,11 @@ may_replace_final_value (struct ivopts_data *data, struct iv_use *use,
gcc_assert (dominated_by_p (CDI_DOMINATORS, exit->src,
bb_for_stmt (use->stmt)));
niter = niter_for_single_dom_exit (data);
if (!niter
|| !zero_p (niter->may_be_zero))
nit = niter_for_single_dom_exit (data);
if (!nit)
return false;
*value = iv_value (use->iv, niter->niter);
*value = iv_value (use->iv, nit);
return true;
}


@@ -129,9 +129,9 @@ inverse (tree x, tree mask)
/* Determines number of iterations of loop whose ending condition
is IV <> FINAL. TYPE is the type of the iv. The number of
iterations is stored to NITER. NEVER_INFINITE is true if
we know that the loop cannot be infinite (we derived this
earlier, and possibly set NITER->assumptions to make sure this
is the case. */
we know that the exit must be taken eventually, i.e., that the IV
ever reaches the value FINAL (we derived this earlier, and possibly set
NITER->assumptions to make sure this is the case). */
static bool
number_of_iterations_ne (tree type, affine_iv *iv, tree final,
@@ -475,9 +475,9 @@ number_of_iterations_lt (tree type, affine_iv *iv0, affine_iv *iv1,
/* Determines number of iterations of loop whose ending condition
is IV0 <= IV1. TYPE is the type of the iv. The number of
iterations is stored to NITER. NEVER_INFINITE is true if
we know that the loop cannot be infinite (we derived this
we know that this condition must eventually become false (we derived this
earlier, and possibly set NITER->assumptions to make sure this
is the case. */
is the case). */
static bool
number_of_iterations_le (tree type, affine_iv *iv0, affine_iv *iv1,
@@ -521,6 +521,11 @@ number_of_iterations_le (tree type, affine_iv *iv0, affine_iv *iv1,
is IV0, the right-hand side is IV1. Both induction variables must have
type TYPE, which must be an integer or pointer type. The steps of the
ivs must be constants (or NULL_TREE, which is interpreted as constant zero).
ONLY_EXIT is true if we are sure this is the only way the loop could be
exited (including possibly non-returning function calls, exceptions, etc.)
-- in this case we can use the information whether the control induction
variables can overflow or not in a more efficient way.
The results (number of iterations and assumptions as described in
comments at struct tree_niter_desc in tree-flow.h) are stored to NITER.
@@ -529,7 +534,8 @@ number_of_iterations_le (tree type, affine_iv *iv0, affine_iv *iv1,
static bool
number_of_iterations_cond (tree type, affine_iv *iv0, enum tree_code code,
affine_iv *iv1, struct tree_niter_desc *niter)
affine_iv *iv1, struct tree_niter_desc *niter,
bool only_exit)
{
bool never_infinite;
@@ -552,13 +558,30 @@ number_of_iterations_cond (tree type, affine_iv *iv0, enum tree_code code,
code = swap_tree_comparison (code);
}
if (!only_exit)
{
/* If this is not the only possible exit from the loop, the information
that the induction variables cannot overflow as derived from
signedness analysis cannot be relied upon. We use them e.g. in the
following way: given loop for (i = 0; i <= n; i++), if i is
signed, it cannot overflow, thus this loop is equivalent to
for (i = 0; i < n + 1; i++); however, if n == MAX, but the loop
is exited in some other way before i overflows, this transformation
is incorrect (the new loop exits immediately). */
iv0->no_overflow = false;
iv1->no_overflow = false;
}
if (POINTER_TYPE_P (type))
{
/* Comparison of pointers is undefined unless both iv0 and iv1 point
to the same object. If they do, the control variable cannot wrap
(as wrap around the bounds of memory will never return a pointer
that would be guaranteed to point to the same object, even if we
avoid undefined behavior by casting to size_t and back). */
avoid undefined behavior by casting to size_t and back). The
restrictions on pointer arithmetics and comparisons of pointers
ensure that using the no-overflow assumptions is correct in this
case even if ONLY_EXIT is false. */
iv0->no_overflow = true;
iv1->no_overflow = true;
}
@@ -943,6 +966,37 @@ simplify_using_outer_evolutions (struct loop *loop, tree expr)
return expr;
}
/* Returns true if EXIT is the only possible exit from LOOP. */
static bool
loop_only_exit_p (struct loop *loop, edge exit)
{
basic_block *body;
block_stmt_iterator bsi;
unsigned i;
tree call;
if (exit != loop->single_exit)
return false;
body = get_loop_body (loop);
for (i = 0; i < loop->num_nodes; i++)
{
for (bsi = bsi_start (body[i]); !bsi_end_p (bsi); bsi_next (&bsi))
{
call = get_call_expr_in (bsi_stmt (bsi));
if (call && TREE_SIDE_EFFECTS (call))
{
free (body);
return false;
}
}
}
free (body);
return true;
}
/* Stores description of number of iterations of LOOP derived from
EXIT (an exit edge of the LOOP) in NITER. Returns true if some
useful information could be derived (and fields of NITER has
@@ -1003,7 +1057,8 @@ number_of_iterations_exit (struct loop *loop, edge exit,
iv0.base = expand_simple_operations (iv0.base);
iv1.base = expand_simple_operations (iv1.base);
if (!number_of_iterations_cond (type, &iv0, code, &iv1, niter))
if (!number_of_iterations_cond (type, &iv0, code, &iv1, niter,
loop_only_exit_p (loop, exit)))
return false;
if (optimize >= 3)
@@ -1226,7 +1281,7 @@ get_base_for (struct loop *loop, tree x)
/* Given an expression X, then
* if BASE is NULL_TREE, X must be a constant and we return X.
* if X is NULL_TREE, we return the constant BASE.
* otherwise X is a SSA name, whose value in the considered loop is derived
by a chain of operations with constant from a result of a phi node in
the header of the loop. Then we return value of X when the value of the
@@ -1239,6 +1294,8 @@ get_val_for (tree x, tree base)
use_operand_p op;
ssa_op_iter iter;
gcc_assert (is_gimple_min_invariant (base));
if (!x)
return base;
@@ -1339,7 +1396,11 @@ loop_niter_by_eval (struct loop *loop, edge exit)
}
for (j = 0; j < 2; j++)
val[j] = get_val_for (next[j], val[j]);
{
val[j] = get_val_for (next[j], val[j]);
if (!is_gimple_min_invariant (val[j]))
return chrec_dont_know;
}
}
return chrec_dont_know;
@@ -1501,15 +1562,20 @@ infer_loop_bounds_from_undefined (struct loop *loop)
utype = unsigned_type_for (type);
if (tree_int_cst_lt (step, integer_zero_node))
diff = fold_build2 (MINUS_EXPR, utype, init,
diff = fold_build2 (MINUS_EXPR, type, init,
TYPE_MIN_VALUE (type));
else
diff = fold_build2 (MINUS_EXPR, utype,
diff = fold_build2 (MINUS_EXPR, type,
TYPE_MAX_VALUE (type), init);
estimation = fold_build2 (CEIL_DIV_EXPR, utype, diff,
step);
record_estimate (loop, estimation, boolean_true_node, stmt);
if (integer_nonzerop (step))
{
estimation = fold_build2 (CEIL_DIV_EXPR, type, diff,
step);
record_estimate (loop,
fold_convert (utype, estimation),
boolean_true_node, stmt);
}
}
break;


@@ -2125,7 +2125,8 @@ get_constraint_for_component_ref (tree t, bool *need_anyoffset)
ignore this constraint. When we handle pointer subtraction,
we may have to do something cute here. */
if (result.offset < get_varinfo (result.var)->fullsize)
if (result.offset < get_varinfo (result.var)->fullsize
&& bitsize != 0)
{
/* It's also not true that the constraint will actually start at the
right offset, it may start in some padding. We only care about
@@ -2147,6 +2148,12 @@ get_constraint_for_component_ref (tree t, bool *need_anyoffset)
gcc_assert (curr);
}
else if (bitsize == 0)
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, "Access to zero-sized part of variable, "
"ignoring\n");
}
else
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, "Access to past the end of variable, ignoring\n");
@@ -2950,6 +2957,9 @@ find_func_aliases (tree t, struct alias_info *ai)
of the RHS. */
if (need_anyoffset || (rhs.type == ADDRESSOF
&& !(get_varinfo (rhs.var)->is_special_var)
&& (POINTER_TYPE_P (TREE_TYPE (anyoffsetrhs))
|| TREE_CODE (TREE_TYPE (anyoffsetrhs))
== ARRAY_TYPE)
&& AGGREGATE_TYPE_P (TREE_TYPE (TREE_TYPE (anyoffsetrhs)))))
{
rhs.var = anyoffset_id;


@@ -1006,7 +1006,7 @@ execute_tail_calls (void)
struct tree_opt_pass pass_tail_recursion =
{
"tailr", /* name */
NULL, /* gate */
gate_tail_calls, /* gate */
execute_tail_recursion, /* execute */
NULL, /* sub */
NULL, /* next */


@@ -1968,6 +1968,11 @@ vectorizable_condition (tree stmt, block_stmt_iterator *bsi, tree *vec_stmt)
then_clause = TREE_OPERAND (op, 1);
else_clause = TREE_OPERAND (op, 2);
/* We do not handle two different vector types for the condition
and the values. */
if (TREE_TYPE (TREE_OPERAND (cond_expr, 0)) != TREE_TYPE (vectype))
return false;
if (!vect_is_simple_cond (cond_expr, loop_vinfo))
return false;


@@ -1101,17 +1101,39 @@ vrp_int_const_binop (enum tree_code code, tree val1, tree val2)
if (TYPE_UNSIGNED (TREE_TYPE (val1)))
{
int checkz = compare_values (res, val1);
bool overflow = false;
/* Ensure that res = val1 [+*] val2 >= val1
or that res = val1 - val2 <= val1. */
if (((code == PLUS_EXPR || code == MULT_EXPR)
if ((code == PLUS_EXPR
&& !(checkz == 1 || checkz == 0))
|| (code == MINUS_EXPR
&& !(checkz == 0 || checkz == -1)))
{
overflow = true;
}
/* Checking for multiplication overflow is done by dividing the
output of the multiplication by the first input of the
multiplication. If the result of that division operation is
not equal to the second input of the multiplication, then the
multiplication overflowed. */
else if (code == MULT_EXPR && !integer_zerop (val1))
{
tree tmp = int_const_binop (TRUNC_DIV_EXPR,
TYPE_MAX_VALUE (TREE_TYPE (val1)),
val1, 0);
int check = compare_values (tmp, val2);
if (check != 0)
overflow = true;
}
if (overflow)
{
res = copy_node (res);
TREE_OVERFLOW (res) = 1;
}
}
else if (TREE_OVERFLOW (res)
&& !TREE_OVERFLOW (val1)


@@ -1,3 +1,7 @@
2006-05-24 Release Manager
* GCC 4.1.1 released.
2006-02-28 Release Manager
* GCC 4.1.0 released.


@@ -1,3 +1,7 @@
2006-05-24 Release Manager
* GCC 4.1.1 released.
2006-02-28 Release Manager
* GCC 4.1.0 released.