Commit Graph

208 Commits

Author SHA1 Message Date
Chris Lattner
3d27be1333 s|llvm/Support/Visibility.h|llvm/Support/Compiler.h|
llvm-svn: 29911
2006-08-27 12:54:02 +00:00
Evan Cheng
c3acfc0b10 Do not use the getTargetNode() and SelectNodeTo() variants which take more than 3
SDOperand arguments. Use the variants which take an array and a count instead.

llvm-svn: 29907
2006-08-27 08:14:06 +00:00
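A minimal sketch of the array-and-count style this change moves to, assuming an overload along the lines of SelectNodeTo(N, Opcode, VT, Ops, NumOps); the opcode and operand names below are illustrative, not taken from the commit:

    // Gather the operands into a local array, then pass a pointer and a count
    // (overload assumed; opcode and operands are purely illustrative).
    SDOperand Ops[] = { Base, Offset, Chain };
    SDNode *New = CurDAG->SelectNodeTo(N, PPC::LWZ, MVT::i32, Ops, 3);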
Evan Cheng
34b70eea5c SelectNodeTo now returns a SDNode*.
llvm-svn: 29901
2006-08-26 08:00:10 +00:00
Evan Cheng
61413a3d72 Select() no longer requires the Result operand by reference.
llvm-svn: 29898
2006-08-26 05:34:46 +00:00
Evan Cheng
ab8297f92d Match tblgen changes.
llvm-svn: 29895
2006-08-26 01:07:58 +00:00
Chris Lattner
bc485fdc4c Fix PowerPC/2006-08-15-SelectionCrash.ll and simplify selection code.
llvm-svn: 29715
2006-08-15 23:48:22 +00:00
Evan Cheng
bd1c5a8fb8 Match tablegen changes.
llvm-svn: 29604
2006-08-11 09:08:15 +00:00
Chris Lattner
c24a1d3093 Start eliminating temporary vectors used to create DAG nodes. Instead, pass
in the start of an array and a count of operands where applicable.  In many
cases, the number of operands is known, so this static array can be allocated
on the stack, avoiding the heap.  In many other cases, a SmallVector can be
used, which has the same benefit in the common cases.

I updated a lot of code calling getNode that takes a vector, but ran out of
time.  The rest of the code should be updated, and these methods should be
removed.

We should also do the same thing to eliminate the methods that take a
vector of MVT::ValueTypes.

It would be extra nice to convert the DAGISel emitter to avoid creating vectors
for operands when calling getTargetNode.

llvm-svn: 29566
2006-08-08 02:23:42 +00:00
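A rough sketch of the two styles this commit describes, with signatures approximated rather than quoted from the tree; the opcodes and operand names are illustrative:

    // Operand count known statically: a stack array replaces a heap-allocated
    // std::vector (a getNode overload taking a pointer and a count is assumed).
    SDOperand Ops[2] = { LHS, RHS };
    SDOperand Sum = DAG.getNode(ISD::ADD, MVT::i32, Ops, 2);

    // Operand count varies: SmallVector keeps the common small cases on the stack.
    SmallVector<SDOperand, 8> CallOps;
    CallOps.push_back(Chain);
    // ... push the argument values ...
    SDOperand Call = DAG.getNode(PPCISD::CALL, MVT::Other, &CallOps[0], CallOps.size());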
Evan Cheng
b9d34bd098 Match tablegen isel changes.
llvm-svn: 29549
2006-08-07 22:28:20 +00:00
Evan Cheng
b572401bea Remove InFlightSet hack. No longer needed.
llvm-svn: 29373
2006-07-28 00:47:19 +00:00
Evan Cheng
f300896420 Remove NodeDepth
llvm-svn: 29338
2006-07-27 06:40:15 +00:00
Chris Lattner
2f8c2d8ef2 shrink libllvmgcc.dylib another 25K
llvm-svn: 28971
2006-06-28 22:00:36 +00:00
Chris Lattner
ca9c488528 Don't match 64-bit bitfield inserts into rlwimi's. TODO: add rldimi. :)
llvm-svn: 28944
2006-06-27 21:08:52 +00:00
Chris Lattner
f882c54505 Fix ppc64 jump tables
llvm-svn: 28941
2006-06-27 20:46:17 +00:00
Chris Lattner
9a40cca40f Fix variable shadowing issue
llvm-svn: 28922
2006-06-27 00:10:13 +00:00
Chris Lattner
97b3da1519 Implement a bunch of 64-bit cleanliness work. With this, treeadd builds (but
doesn't work right).

llvm-svn: 28921
2006-06-27 00:04:13 +00:00
Chris Lattner
b055c8737f Work around a nasty tblgen bug where it doesn't add operands for varargs
nodes correctly.

llvm-svn: 28745
2006-06-10 01:15:02 +00:00
Chris Lattner
1fbb0d38c7 Fix build failure of povray
llvm-svn: 28473
2006-05-25 18:06:16 +00:00
Chris Lattner
630bbcef8d Fix Benchmarks/MallocBench/cfrac
llvm-svn: 28471
2006-05-25 16:54:16 +00:00
Evan Cheng
4af59dac0b Assert if InflightSet is not cleared after instruction selecting a BB.
llvm-svn: 28459
2006-05-25 00:24:28 +00:00
Evan Cheng
1a8e74d113 Clear HandleMap and ReplaceMap after instruction selection. Otherwise it may cause
non-deterministic behavior.

llvm-svn: 28454
2006-05-24 20:46:25 +00:00
Chris Lattner
eb755fc1b3 Make PPC call lowering more aggressive, making the isel matching code simple
enough to be autogenerated.

llvm-svn: 28354
2006-05-17 19:00:46 +00:00
Chris Lattner
b1e9e37c58 Switch PPC over to a call-selection model where the lowering code creates
the copyto/fromregs instead of making the PPCISD::CALL selection code create
them.  This vastly simplifies the selection code, and moves the ABI handling
parts into one place.

llvm-svn: 28346
2006-05-17 06:01:33 +00:00
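A very rough fragment (API details assumed, not quoted from the commit) of what "the lowering code creates the copyto/fromregs" means: the copies into the argument registers are emitted while the call sequence is lowered, so the PPCISD::CALL selection pattern no longer has to synthesize them:

    // Copy an outgoing argument into its physical register during lowering;
    // the register choice and variable names are illustrative.
    Chain = DAG.getCopyToReg(Chain, PPC::R3, ArgValue);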
Chris Lattner
f058f5aef1 implement passing/returning vector regs to calls, at least non-varargs calls.
llvm-svn: 28341
2006-05-16 23:54:25 +00:00
Chris Lattner
a296339c87 Fix PowerPC/2006-05-12-rlwimi-crash.ll
Nate, please verify that if InsertMask is 0, rlwimi shouldn't be used.
This fixes the crash and causes no PPC testsuite regressions.

llvm-svn: 28243
2006-05-12 16:29:37 +00:00
Nate Begeman
9b6d4c2968 Fold more shifts into inserts, and update the README
llvm-svn: 28168
2006-05-08 17:38:32 +00:00
Nate Begeman
dc996b3f6c Update some stuff now that the new rlwimi code has gone in
llvm-svn: 28162
2006-05-08 02:52:38 +00:00
Nate Begeman
1333cead5b New rlwimi implementation, which is superior to the old one. There are
still a couple missed optimizations, but we now generate all the possible
rlwimis for multiple inserts into the same bitfield.  More regression tests
to come.

llvm-svn: 28156
2006-05-07 00:23:38 +00:00
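rlwimi (Rotate Left Word Immediate then Mask Insert) rotates a source register and merges it into a destination under a mask. An illustrative C-level bitfield insert of the kind a single rlwimi covers, written as plain C++ and not taken from the commit:

    // Insert the low byte of Src into bits 8..15 of Dst, preserving the rest:
    // one rotate (shift by 8) plus a masked merge, i.e. one rlwimi on PPC.
    static unsigned insertByte(unsigned Dst, unsigned Src) {
      return (Dst & ~0x0000FF00u) | ((Src << 8) & 0x0000FF00u);
    }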
Nate Begeman
4ca2ea5b43 JumpTable support! What this represents is working asm and jit support for
x86 and ppc for 100% dense switch statements when relocations are non-PIC.
This support will be extended and enhanced in the coming days to support
PIC, and less dense forms of jump tables.

llvm-svn: 27947
2006-04-22 18:53:45 +00:00
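For context, a sketch of the kind of 100% dense switch the new support targets, written as plain C++ (not from the commit): consecutive case values let the backend emit a table of block addresses and one indexed branch instead of a compare-and-branch chain.

    // Every value in [0, 3] has a case, so the switch is fully dense and can
    // become a four-entry jump table plus a single bounds check.
    int classify(int v) {
      switch (v) {
      case 0: return 10;
      case 1: return 20;
      case 2: return 30;
      case 3: return 40;
      default: return -1;
      }
    }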
Chris Lattner
0a3d1bbca4 Add VRRC select support
llvm-svn: 27543
2006-04-08 22:45:08 +00:00
Chris Lattner
6961fc76bb Codegen vector predicate compares.
llvm-svn: 27151
2006-03-26 10:06:40 +00:00
Chris Lattner
5d70a7c4a5 #include Intrinsics.h into all dag isels
llvm-svn: 27109
2006-03-25 06:47:10 +00:00
Chris Lattner
f2286d5917 Like the comment says, prefer to use the implicit add done by [r+r] addressing
modes than emitting an explicit add and using a base of r0.  This implements
Regression/CodeGen/PowerPC/mem-rr-addr-mode.ll

llvm-svn: 27068
2006-03-24 17:58:06 +00:00
Chris Lattner
77373d1bea Add support for "ri" addressing modes where the immediate is a 14-bit field
which is shifted left two bits before use.  Instructions like STD use this
addressing mode.

llvm-svn: 26942
2006-03-22 05:26:03 +00:00
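A sketch of the legality check the commit implies, as plain C++ with a hypothetical helper name: a DS-form displacement (as used by STD) must fit in 16 signed bits and have its low two bits clear, since only the upper 14 bits are encoded.

    #include <cstdint>

    // True if Imm can be encoded as a DS-form displacement: a signed 16-bit
    // value that is a multiple of 4 (the encoded field is Imm >> 2).
    static bool isValidDSFormDisp(int64_t Imm) {
      return (Imm & 3) == 0 && Imm >= -32768 && Imm <= 32767;
    }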
Chris Lattner
bda7310ef7 With Evan's latest tblgen patch, this code is obsolete, thanks Evan!
llvm-svn: 26917
2006-03-21 06:37:40 +00:00
Chris Lattner
c8b16d00b9 Handle constant addresses more efficiently, folding the low bits into the
disp field of the load/store if possible.  This compiles
CodeGen/PowerPC/load-constant-addr.ll to:

_test:
        lis r2, 2838
        lfs f1, 26848(r2)
        blr

instead of:

_test:
        lis r2, 2838
        ori r2, r2, 26848
        lfs f1, 0(r2)
        blr

llvm-svn: 26908
2006-03-20 22:38:22 +00:00
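A sketch of the split behind this output, as plain C++ with a hypothetical helper name: the low 16 bits of the constant address become the load's displacement and the remainder feeds lis, with the high half absorbing the borrow when the low half is negative as a signed 16-bit value.

    #include <cstdint>

    // Split Addr so that Addr == (Hi << 16) + Lo with Lo a signed 16-bit
    // displacement; in the example above Hi is 2838 and Lo is 26848.
    static void splitConstantAddr(int64_t Addr, int16_t &Hi, int16_t &Lo) {
      Lo = (int16_t)(Addr & 0xFFFF);      // sign-extended displacement
      Hi = (int16_t)((Addr - Lo) >> 16);  // immediate for lis
    }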
Chris Lattner
eda030da04 reenable this hack, the tblgen version isn't quite ready
llvm-svn: 26902
2006-03-20 17:54:43 +00:00
Evan Cheng
89f3cff0f5 Use tblgen'd VECTOR_SHUFFLE selection code.
llvm-svn: 26900
2006-03-20 08:14:16 +00:00
Chris Lattner
a9a1313386 Add support for generating vspltw, instead of a vperm instruction with a
constant pool load.  This generates significantly nicer code for splats.

When tblgen gets bugfixed, we can remove the custom selection code.

llvm-svn: 26898
2006-03-20 06:51:10 +00:00
Nate Begeman
bb01d4f272 Remove BRTWOWAY*
Make the PPC backend not dependent on BRTWOWAY_CC and make the branch
selector smarter about the code it generates, fixing a case in the
readme.

llvm-svn: 26814
2006-03-17 01:40:33 +00:00
Chris Lattner
1678a6c477 Save/restore VRSAVE once per function, not once per block.
llvm-svn: 26793
2006-03-16 18:25:23 +00:00
Chris Lattner
ab1ed2aa96 Fix an off by one error that caused PPC LLC failures last night.
llvm-svn: 26758
2006-03-14 17:56:49 +00:00
Evan Cheng
2dd2c652b2 Added getTargetLowering() to TargetMachine. Refactored targets to support this.
llvm-svn: 26742
2006-03-13 23:20:37 +00:00
Chris Lattner
02e2c18c9c For functions that use vector registers, save VRSAVE, mark used
registers, and update it on entry to each function, then restore it on exit.

This compiles:

void func(vfloat *a, vfloat *b, vfloat *c) {
        *a = *b * *c + *c;
}

to this:

_func:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        lvx v0, 0, r5
        lvx v1, 0, r4
        vmaddfp v0, v1, v0, v0
        stvx v0, 0, r3
        mtspr 256, r2
        blr

GCC produces this (which has additional stack accesses):

_func:
        mfspr r0,256
        stw r0,-4(r1)
        oris r0,r0,0xc000
        mtspr 256,r0
        lvx v0,0,r5
        lvx v1,0,r4
        lwz r12,-4(r1)
        vmaddfp v0,v0,v1,v0
        stvx v0,0,r3
        mtspr 256,r12
        blr

llvm-svn: 26733
2006-03-13 21:52:10 +00:00
Chris Lattner
51348c5f27 Several big changes:
1. Use flags on the instructions in the .td file to indicate the PPC970 unit
   type instead of a table in the .cpp file.  Much cleaner.
2. Change the hazard recognizer to build d-groups according to the actual
   algorithm used, not my flawed understanding of it.
3. Model "must be in the first slot" and "must be the only instr in a group"
   accurately.

llvm-svn: 26719
2006-03-12 09:13:49 +00:00
Chris Lattner
543832d39d Change the interface for getting a target HazardRecognizer to be more clean.
llvm-svn: 26608
2006-03-08 04:25:59 +00:00
Chris Lattner
2cab13573c Implement a very very simple hazard recognizer for LSU rejects and ctr set/read
flushes

llvm-svn: 26587
2006-03-07 06:32:48 +00:00
Chris Lattner
60a60f4b1e Implement CodeGen/PowerPC/or-addressing-mode.ll, which is also PR668.
llvm-svn: 26450
2006-03-01 07:14:48 +00:00
Chris Lattner
a1ec1ddd59 Implement selection of inline asm memory operands
llvm-svn: 26348
2006-02-24 02:13:12 +00:00
Nate Begeman
5965bd19f8 kill ADD_PARTS & SUB_PARTS and replace them with fancy new ADDC, ADDE, SUBC
and SUBE nodes that actually expose what's going on and allow for
significant simplifications in the targets.

llvm-svn: 26255
2006-02-17 05:43:56 +00:00
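A plain C++ model (not LLVM code) of what the new nodes express for a 64-bit add split across 32-bit values: ADDC adds the low halves and produces a carry, and ADDE adds the high halves plus that carry.

    // Two-word addition in the ADDC/ADDE style; the struct and function are
    // illustrative, not part of the commit.
    struct Parts { unsigned Lo, Hi; };

    static Parts add64(Parts A, Parts B) {
      Parts R;
      R.Lo = A.Lo + B.Lo;                       // ADDC: add the low halves
      unsigned Carry = (R.Lo < A.Lo) ? 1u : 0u; // carry out of the low add
      R.Hi = A.Hi + B.Hi + Carry;               // ADDE: high halves plus carry in
      return R;
    }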