c - Why does mulss take only 3 cycles on Haswell, different from Agner's instruction tables? (Unrolling FP loops with multiple accumulators)

I'm a newbie at instruction optimization.

I did a simple analysis of a simple function dotp, which computes the dot product of two float arrays.

The C code is as follows:

float dotp(               
    const float  x[],   
    const float  y[],     
    const short  n      
)
{
    short i;
    float suma;
    suma = 0.0f;

    for(i=0; i<n; i++) 
    {    
        suma += x[i] * y[i];
    } 
    return suma;
}

I use the test framework testp provided by Agner Fog on his website.

The arrays which are used in this case are aligned:

int n = 2048;
float* z2 = (float*)_mm_malloc(sizeof(float)*n, 64);
char *mem = (char*)_mm_malloc(1<<18,4096);
char *a = mem;
char *b = a+n*sizeof(float);
char *c = b+n*sizeof(float);

float *x = (float*)a;
float *y = (float*)b;
float *z = (float*)c;

Then I call the function dotp with n=2048 and repeat=100000:

 for (i = 0; i < repeat; i++)
 {
     sum = dotp(x,y,n);
 }

I compile it with gcc 4.8.3, using the compile option -O3.

I compile this application on a computer which does not support FMA instructions, so you can see there are only SSE instructions.

The assembly code:

.L13:
        movss   xmm1, DWORD PTR [rdi+rax*4]  
        mulss   xmm1, DWORD PTR [rsi+rax*4]   
        add     rax, 1                       
        cmp     cx, ax
        addss   xmm0, xmm1
        jg      .L13

I do some analysis:

         uops-fused  latency   p0    p1    p2    p3    p4    p5    p6    p7
movss        1          3                  0.5   0.5
mulss        1          5      0.5   0.5   0.5   0.5
add          1          1      0.25  0.25                    0.25  0.25
cmp          1          1      0.25  0.25                    0.25  0.25
addss        1          3            1
jg           1          1                                          1
---------------------------------------------------------------------------
total        6          5      1     2     1     1     0     0.5   1.5   0

After running, we get the result:

   Clock  |  Core cyc |  Instruct |   BrTaken | uop p0   | uop p1      
--------------------------------------------------------------------
542177906 |609942404  |1230100389 |205000027  |261069369 |205511063 
--------------------------------------------------------------------  
   2.64   |  2.97     | 6.00      |     1     | 1.27     |  1.00   

   uop p2   |    uop p3   |  uop p4 |    uop p5  |  uop p6    |  uop p7       
-----------------------------------------------------------------------   
 205185258  |  205188997  | 100833  |  245370353 |  313581694 |  844  
-----------------------------------------------------------------------          
    1.00    |   1.00      | 0.00    |   1.19     |  1.52      |  0.00           

The second line is the value read from the Intel performance counters; the third line is that value divided by the branch count, "BrTaken".

So we can see that the loop has 6 instructions and 7 uops, in agreement with the analysis.

The numbers of uops run on port 0, port 1, port 5 and port 6 are similar to what the analysis predicts. I think the uop scheduler may be doing this; it may try to balance the load across the ports. Am I right?

I absolutely do not understand why there are only about 3 cycles per loop iteration. According to Agner's instruction table, the latency of mulss is 5 cycles, and there are dependencies between loop iterations, so as far as I can see the loop should take at least 5 cycles per iteration.

Could anyone shed some light on this?

==================================================================

I tried to write an optimized version of this function in NASM, unrolling the loop by a factor of 8 and using the vfmadd231ps instruction:

.L2:
    vmovaps         ymm1, [rdi+rax]             
    vfmadd231ps     ymm0, ymm1, [rsi+rax]       

    vmovaps         ymm2, [rdi+rax+32]          
    vfmadd231ps     ymm3, ymm2, [rsi+rax+32]    

    vmovaps         ymm4, [rdi+rax+64]          
    vfmadd231ps     ymm5, ymm4, [rsi+rax+64]    

    vmovaps         ymm6, [rdi+rax+96]          
    vfmadd231ps     ymm7, ymm6, [rsi+rax+96]   

    vmovaps         ymm8, [rdi+rax+128]         
    vfmadd231ps     ymm9, ymm8, [rsi+rax+128]  

    vmovaps         ymm10, [rdi+rax+160]               
    vfmadd231ps     ymm11, ymm10, [rsi+rax+160] 

    vmovaps         ymm12, [rdi+rax+192]                
    vfmadd231ps     ymm13, ymm12, [rsi+rax+192] 

    vmovaps         ymm14, [rdi+rax+224]                
    vfmadd231ps     ymm15, ymm14, [rsi+rax+224] 
    add             rax, 256                    
    jne             .L2

The result:

  Clock   | Core cyc |  Instruct  |  BrTaken  |  uop p0   |   uop p1  
------------------------------------------------------------------------
 24371315 |  27477805|   59400061 |   3200001 |  14679543 |  11011601  
------------------------------------------------------------------------
    7.62  |     8.59 |  18.56     |     1     | 4.59      |     3.44


   uop p2  | uop p3  |  uop p4  |   uop p5  |   uop p6   |  uop p7  
-------------------------------------------------------------------------
 25960380  |26000252 |  47      |  537      |   3301043  |  10          
------------------------------------------------------------------------------
    8.11   |8.13     |  0.00    |   0.00    |   1.03     |  0.00        

So we can see the L1 data cache bandwidth reaches 2*256 bits per 8.59 cycles, very near the peak of 2*256 bits per 8 cycles; the usage is about 93%. The FMA units run only 8 uops per 8.59 cycles, while the peak is 2*8 per 8 cycles, so the usage is 47%.

So I think I've reached the L1D bottleneck, as Peter Cordes expected.

==================================================================

Special thanks to Boann for fixing so many grammatical errors in my question.

=================================================================

From Peter's reply, I understand that only registers which are both read and written form a dependency; write-only registers are not a dependency.

So I tried to reduce the number of registers used in the loop, and I tried unrolling by 5; if everything is OK, I should run into the same L1D bottleneck.

.L2:
    vmovaps         ymm0, [rdi+rax]    
    vfmadd231ps     ymm1, ymm0, [rsi+rax]    

    vmovaps         ymm0, [rdi+rax+32]    
    vfmadd231ps     ymm2, ymm0, [rsi+rax+32]   

    vmovaps         ymm0, [rdi+rax+64]    
    vfmadd231ps     ymm3, ymm0, [rsi+rax+64]   

    vmovaps         ymm0, [rdi+rax+96]    
    vfmadd231ps     ymm4, ymm0, [rsi+rax+96]   

    vmovaps         ymm0, [rdi+rax+128]    
    vfmadd231ps     ymm5, ymm0, [rsi+rax+128]   

    add             rax, 160                    ; advance 160 bytes = 40 floats
    jne             .L2 

The result:

    Clock  | Core cyc  | Instruct  |  BrTaken |    uop p0  |   uop p1  
------------------------------------------------------------------------  
  25332590 |  28547345 |  63700051 |  5100001 |   14951738 |  10549694   
------------------------------------------------------------------------
    4.97   |  5.60     | 12.49     |    1     |     2.93   |    2.07    

    uop p2  |uop p3   | uop p4 | uop p5 |uop p6   |  uop p7 
------------------------------------------------------------------------------  
  25900132  |25900132 |   50   |  683   | 5400909 |     9  
-------------------------------------------------------------------------------     
    5.08    |5.08     |  0.00  |  0.00  |1.06     |     0.00    

We can see 5/5.60 = 89.45%; it is a little smaller than unrolling by 8. Is there something wrong?

=================================================================

I tried unrolling the loop by 6, 7 and 15 to see the results. I also unrolled by 5 and 8 again, to double-check the results.

The results are as follows; we can see that this time the results are much better than before.

Although the results are not stable, the bigger the unrolling factor, the better the result.

            | L1D bandwidth     |  CodeMiss | L1D Miss | L2 Miss 
----------------------------------------------------------------------------
  unroll5   | 91.86% ~ 91.94%   |   3~33    | 272~888  | 17~223
--------------------------------------------------------------------------
  unroll6   | 92.93% ~ 93.00%   |   4~30    | 481~1432 | 26~213
--------------------------------------------------------------------------
  unroll7   | 92.29% ~ 92.65%   |   5~28    | 336~1736 | 14~257
--------------------------------------------------------------------------
  unroll8   | 95.10% ~ 97.68%   |   4~23    | 363~780  | 42~132
--------------------------------------------------------------------------
  unroll15  | 97.95% ~ 98.16%   |   5~28    | 651~1295 | 29~68

=====================================================================

I tried compiling the function with gcc 7.1 on "https://gcc.godbolt.org".

The compile options are "-O3 -march=haswell -mtune=intel"; the result is similar to that of gcc 4.8.3:

.L3:
        vmovss  xmm1, DWORD PTR [rdi+rax]
        vfmadd231ss     xmm0, xmm1, DWORD PTR [rsi+rax]
        add     rax, 4
        cmp     rdx, rax
        jne     .L3
        ret

1 Answer

Look at your loop again: movss xmm1, src has no dependency on the old value of xmm1, because its destination is write-only. Each iteration's mulss is independent. Out-of-order execution can and does exploit that instruction-level parallelism, so you definitely don't bottleneck on mulss latency.

Optional reading: In computer architecture terms: register renaming avoids the WAR anti-dependency data hazard of reusing the same architectural register. (Some pipelining + dependency-tracking schemes before register renaming didn't solve all the problems, so the field of computer architecture makes a big deal out of different kinds of data hazards.)

Register renaming with Tomasulo's algorithm makes everything go away except the actual true dependencies (read after write), so any instruction where the destination is not also a source register has no interaction with the dependency chain involving the old value of that register. (Except for false dependencies, like popcnt on Intel CPUs, and writing only part of a register without clearing the rest (like mov al, 5 or sqrtss xmm2, xmm1). Related: Why do x86-64 instructions on 32-bit registers zero the upper part of the full 64-bit register?).
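
To see the same thing at the C level, here's a minimal sketch of my own (not part of the original question): the temporary t is architecturally reused every iteration, but each assignment to it is write-only, so renaming gives every iteration its own physical register. The only true loop-carried chain is the read-modify-write of suma, exactly like addss xmm0, xmm1 in the asm below.

    float dotp_renamed(const float *x, const float *y, int n)
    {
        float suma = 0.0f;
        for (int i = 0; i < n; i++) {
            float t = x[i] * y[i];  /* write-only destination each iteration,
                                       like movss/mulss into xmm1: independent */
            suma += t;              /* read-modify-write, like addss xmm0, xmm1:
                                       the one serial dependency chain */
        }
        return suma;
    }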


Back to your code:

.L13:
    movss   xmm1, DWORD PTR [rdi+rax*4]  
    mulss   xmm1, DWORD PTR [rsi+rax*4]   
    add     rax, 1                       
    cmp     cx, ax
    addss   xmm0, xmm1
    jg      .L13

The loop-carried dependencies (from one iteration to the next) are each:

  • xmm0, read and written by addss xmm0, xmm1, which has 3 cycle latency on Haswell.
  • rax, read and written by add rax, 1. 1c latency, so it's not the critical path.

It looks like you measured the execution time / cycle-count correctly, because the loop bottlenecks on the 3c addss latency.

This is expected: the serial dependency in a dot product is the addition into a single sum (aka the reduction), not the multiplies between vector elements. (Unrolling with multiple sum accumulator variables / registers can hide that latency.)
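
As a concrete sketch of that idea in C (mine, not from the question: it changes the FP rounding order, which is why compilers only do this for you with -ffast-math): Haswell's addss has 3-cycle latency and one-per-clock throughput, so at least 3 independent accumulators are needed to hide the latency; 4 gives some slack.

    float dotp_unrolled4(const float *x, const float *y, int n)
    {
        /* Four independent serial chains instead of one. */
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        int i;
        for (i = 0; i + 3 < n; i += 4) {
            s0 += x[i+0] * y[i+0];
            s1 += x[i+1] * y[i+1];
            s2 += x[i+2] * y[i+2];
            s3 += x[i+3] * y[i+3];
        }
        float sum = (s0 + s1) + (s2 + s3);  /* combine once, outside the loop */
        for (; i < n; i++)                  /* scalar cleanup for leftover elements */
            sum += x[i] * y[i];
        return sum;
    }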

That is by far the dominant bottleneck for this loop, despite various minor inefficiencies:


short i produced the silly cmp cx, ax, which takes an extra operand-size prefix. Luckily, gcc managed to avoid actually doing add ax, 1, because signed-overflow is Undefined Behaviour in C. So the optimizer can assume it doesn't happen. (update: integer promotion rules make it different for short, so UB doesn't come into it, but gcc can still legally optimize. Pretty wacky stuff.)

If you'd compiled with -mtune=intel, or better, -march=haswell, gcc would have put the cmp and jg next to each other where they could macro-fuse.

I'm not sure why you have a * in your table on the cmp and add instructions. (update: I was purely guessing that you were using a notation like IACA does, but apparently you weren't). Neither of them fuse. The only fusion happening is micro-fusion of mulss xmm1, [rsi+rax*4].

And since it's a 2-operand ALU instruction with a read-modify-write destination register, it stays micro-fused even in the ROB on Haswell. (Sandybridge would un-laminate it at issue time.) Note that vmulss xmm1, xmm1, [rsi+rax*4] would un-laminate on Haswell, too.

None of this really matters, since you just totally bottleneck on FP-add latency, much slower than any uop-throughput limits. Without -ffast-math, there's nothing compilers can do. With -ffast-math, clang will usually unroll with multiple accumulators, and it will auto-vectorize so they will be vector accumulators. So you can probably saturate Haswell's throughput limit of 1 vector or scalar FP add per clock, if you hit in L1D cache.

With FMA being 5c latency and 0.5c throughput on Haswell, you would need 10 accumulators to keep 10 FMAs in flight and max out FMA throughput by keeping p0/p1 saturated with FMAs. (Skylake reduced FMA latency to 4 cycles, and runs multiply, add, and FMA on the FMA units. So it actually has higher add latency than Haswell.)
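
Here's a hedged intrinsics sketch of a 10-accumulator version (my code, not anything from the question): it assumes AVX2+FMA, 32-byte-aligned inputs, and n a multiple of 80, and it relies on the compiler fully unrolling the fixed-count inner loop so acc[] stays in ymm registers. As the next paragraph notes, this particular loop still can't reach FMA peak, because it needs two loads per FMA.

    #include <immintrin.h>

    /* Hypothetical sketch: 10 vector accumulators = FMA latency (5) x
     * throughput (2 per clock) on Haswell.  Assumes n % 80 == 0 and
     * 32-byte-aligned pointers. */
    float dotp_fma10(const float *x, const float *y, int n)
    {
        __m256 acc[10];
        for (int k = 0; k < 10; k++)
            acc[k] = _mm256_setzero_ps();

        for (int i = 0; i < n; i += 10 * 8) {
            for (int k = 0; k < 10; k++) {   /* compilers typically fully unroll this */
                __m256 a = _mm256_load_ps(x + i + 8*k);
                __m256 b = _mm256_load_ps(y + i + 8*k);
                acc[k] = _mm256_fmadd_ps(a, b, acc[k]);
            }
        }

        /* Serial reduction of the 10 accumulators (fine outside the hot
         * loop), then a horizontal sum of the final vector. */
        for (int k = 1; k < 10; k++)
            acc[0] = _mm256_add_ps(acc[0], acc[k]);
        __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc[0]),
                              _mm256_extractf128_ps(acc[0], 1));
        s = _mm_add_ps(s, _mm_movehl_ps(s, s));
        s = _mm_add_ss(s, _mm_shuffle_ps(s, s, 1));
        return _mm_cvtss_f32(s);
    }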

(You're bottlenecked on loads, because you need two loads for every FMA. In other cases, you can actually gain add throughput by replacing a vaddps instruction with an FMA with a multiplier of 1.0. This means more latency to hide, so it's best in a more complex algorithm where you have an add that's not on the critical path in the first place.)


Re: uops per port:

there are 1.19 uops per loop on port 5; it is much more than the expected 0.5. Is it a matter of the uop dispatcher trying to make the uops on every port the same?

Yes, something like that.

The uops are not assigned randomly, or somehow evenly distributed across every port they could run on. You assumed that the add and cmp uops would distribute evenly across p0156, but that's not the case.

The issue stage assigns uops to ports based on how many uops are already waiting for that port. Since addss can only run on p1 (and it's the loop bottleneck), there are usually a lot of p1 uops issued but not executed. So few other uops will ever be scheduled to port1. (This includes mulss: most of the mulss uops will end up scheduled to port 0.)

Taken-branches can only run on port 6. Port 5 doesn't have any uops in this loop that can only run there, so it ends up attracting a lot of the many-port uops.

The scheduler (which picks unfused-domain uops out of the Reservation Station) isn't smart enough to run critical-path-first, so this assignment algorithm reduces resource-conflict latency (other uops stealing port 1 on cycles when an addss could have run). It's also useful in cases where you bottleneck on the throughput of a given port.

Scheduling of already-assigned uops is normally oldest-ready first, as I understand it. This simple algorithm is hardly surprising, since it has to pick a uop with its inputs ready for each port from a 60-entry RS every clock cycle, without melting your CPU. The out-of-order machinery that finds and exploits the ILP is one of the significant power costs in a modern CPU, comparable to the execution units that do the actual work.

Related / more details: How are x86 uops scheduled, exactly?


More performance analysis stuff:

Other than cache misses / branch mispredicts, the three main possible bottlenecks for CPU-bound loops are:

  • dependency chains (like in this case)
  • front-end throughput (max of 4 fused-domain uops issued per clock on Haswell)
  • execution port bottlenecks, like if lots of uops need p0/p1, or p2/p3, like in your unrolled loop. Count unfused-domain uops for specific ports. Generally you can assume a best-case distribution, with uops that can run on other ports not stealing the busy ports very often, but it does happen sometimes.

A loop body or short block of code can be approximately characterized by 3 things: fused-domain uop count, unfused-domain count of which execution units it can run on, and total critical-path latency assuming best-case scheduling for its critical path. (Or latencies from each of input A/B/C to the output...)

For an example of doing all three to compare a few short sequences, see my answer on What is the efficient way to count set bits at a position or lower?

For short loops, modern CPUs have enough out-of-order execution resources (physical register file size so renaming doesn't run out of registers, ROB size) to have enough iterations of a loop in-flight to find all the parallelism. But as dependency chains within loops get longer, eventually they run out. See Measuring Reorder Buffer Capacity for some details on what happens when a CPU runs out of registers to rename onto.

See also lots of performance and reference links in the tag wiki.


Tuning your FMA loop:

Yes, dot-product on Haswell will bottleneck on L1D throughput at only half the throughput of the FMA units, since it takes two loads per multiply+add.

If you were doing B[i] = x * A[i] + y; or sum(A[i]^2), you could saturate FMA throughput.
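
For instance, here's a sum-of-squares sketch along the same lines as the earlier one (again my code, with the same assumptions: AVX2+FMA, n % 80 == 0, 32-byte alignment). It needs only one load per FMA, so the two-loads-per-cycle L1D limit no longer caps the FMA ports.

    #include <immintrin.h>

    /* Hypothetical sketch: sum(a[i]^2) with one load per FMA.  Same
     * assumptions as the dotp_fma10 sketch above. */
    float sumsq_fma10(const float *a, int n)
    {
        __m256 acc[10];
        for (int k = 0; k < 10; k++)
            acc[k] = _mm256_setzero_ps();
        for (int i = 0; i < n; i += 10 * 8)
            for (int k = 0; k < 10; k++) {
                __m256 v = _mm256_load_ps(a + i + 8*k);
                acc[k] = _mm256_fmadd_ps(v, v, acc[k]);  /* v*v + acc: one load per FMA */
            }
        for (int k = 1; k < 10; k++)
            acc[0] = _mm256_add_ps(acc[0], acc[k]);
        __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc[0]),
                              _mm256_extractf128_ps(acc[0], 1));
        s = _mm_add_ps(s, _mm_movehl_ps(s, s));
        s = _mm_add_ss(s, _mm_shuffle_ps(s, s, 1));
        return _mm_cvtss_f32(s);
    }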

It looks like you're still trying to avoid register reuse even in write-only cases like the destination of a vmovaps load, so you ran out of registers after unrolling by 8. That's fine, but could matter for other cases.

Also, using ymm8-15 can slightly increase code-size if it means a 3-byte VEX prefix is needed instead of 2-byte. Fun fact: vpxor ymm7,ymm7,ymm8 needs a 3-byte VEX while vpxor ymm8,ymm8,ymm7 only needs a 2-byte VEX prefix.

