Efficient Rate Conversion Filtering on GPUs with Shared Memory Access Pattern Scrambling

Mrugesh Gajjar, Siemens Medical Solutions, Ultrasound Business Unit, Mountain View, USA, [email protected]
Ismayil Guracar, Siemens Medical Solutions, Ultrasound Business Unit, Mountain View, USA, [email protected]

Abstract—Sampling rate conversion is a fundamental operation in signal processing applications. When rate conversion is implemented together with convolution filtering on GPGPU architectures, a typical optimized implementation suffers performance loss for specific sampling factors because of interaction with the shared memory bank architecture of the GPU. We investigate the problem and find that generic algorithmic modifications can avoid the shared memory bank conflicts that cause the performance loss. Specifically, we propose two techniques that perform the convolution operation in a scrambled order on adjacent threads, adding some computational overhead but eliminating the large performance losses due to bank conflicts. Using 2D image and 1D audio signals downsampled in one dimension by an integer factor as examples, we evaluate our techniques on five GPGPU architectures and show improved and consistent rate conversion performance across all downsampling factors, with speedups of up to 4.15x.

Keywords—Rate conversion filtering; Dot product; Image resampling; Shared memory; GPGPU

I. INTRODUCTION

With the advent of SIMT (Single Instruction Multiple Thread) languages such as CUDA [1], [2] and OpenCL [3], Graphics Processing Units (GPUs) are increasingly applied to general purpose computation. GPUs are finding their way into mainstream products as they solve data- and thread-parallel problems in a performance- and power-efficient manner. In particular, GPUs are used to speed up computations in domains such as scientific computing, computer vision, image processing, audio and speech signal processing, and medical imaging.

In this work, we explore GPU performance for image and audio downsampling [4] in one dimension (1D). We first outline a typical implementation of 1D downsampling by an integer factor. The downsampling computation is typically implemented along with convolution filtering to avoid aliasing. We measure and analyze its performance on an Intel HD4000, an AMD W5100, and three NVIDIA GPUs, one from each of their recent architectures, viz. Fermi [5], Kepler [6] and Maxwell [7]. We observe that performance degrades drastically for certain even downsampling factors across all GPUs. This occurs because of shared memory bank conflicts when multiple threads in a warp [1] read equally spaced locations, with the spacing determined by the downsampling factor itself. Using the fact that the convolution (dot product) can be summed in any order, we propose algorithms that perform this dot product in a scrambled manner across threads, avoiding the bank conflicts. Specifically, we propose two scrambling schemes, namely Modulo (%) Scrambling and Permutation Scrambling. Both introduce additional computation in the convolution loop to compute the scrambling pattern. Hence, our GPU algorithms are tailored to architecture-specific performance considerations such as the time-consuming modulo (%) operation, slower divergent constant memory accesses within a warp, and costly branch execution.


Figure 1. Illustration of 1/3 downsampling in the X direction with a 7-tap filter: the filtering ranges for adjacent output pixels overlap in the input image.

For example, a simple access pattern striding scheme (Modulo (%) Scrambling) that delivered performance improvements on Intel and AMD GPU architectures resulted in performance loss on NVIDIA architectures, because avoiding bank conflicts in shared memory slowed down constant memory reads of the filter coefficients. We therefore developed a second technique (Permutation Scrambling) that avoids shared memory bank conflicts while preserving the constant memory read performance. This increases the amount of code and computation in the convolution loop but delivers significant overall speedups of up to 4.15x.

Memory bank conflicts are a well-researched problem in vector processors and supercomputers [8]. The focus has been on the design of hardware [9] and of software-based data layout and storage schemes [10], [11] to avoid bank conflicts. The problem received renewed attention with the introduction of programmer-managed, banked GPU shared memory [12]. A new GPU pipeline design along with a bank conflict aware warp-scheduling scheme was proposed in [13]. A GPU-based efficient implementation of arbitrarily resampling polyphase filters was discussed in [14], but bank conflicts are not addressed there, as the focus is on throughput for multiple channels. In this work, we do not modify the data layout of our input and output arrays; instead we exploit the computational structure of the convolution filtering operation to modify the algorithm and avoid the bank conflicts.

In Figure 1, we show filtering along the X dimension of an image with 7 filter coefficients and downsampling by a factor of 1/3. Without loss of generality we assume image pixels and filter coefficients are floating point values. Each output pixel value is a dot product between the 7 filter coefficients and the input pixels in the filtering range. As the number of filter coefficients is larger than the downsampling factor (DF), the filtering ranges for adjacent output pixels overlap.
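For reference, the per-output-sample computation can be written compactly as a decimating FIR sum (a sketch consistent with Algorithm 1 below, with x denoting input samples, y output samples and c the filter coefficients):

\[
y[n] = \sum_{k=0}^{T-1} c[k]\, x[n \cdot DF + k], \qquad n = 0, 1, 2, \ldots
\]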

Algorithm 1 DOWNSAMPLE KERNEL
{T is number of filter taps}
{DF is downsampling factor}
index ← local_thread_id × DF
sum ← 0
for k = 0 to T − 1 do
  sum ← sum + sh_input[k + index] × c[k]
end for
output[global_thread_id] ← sum
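The following CUDA sketch illustrates how Algorithm 1 typically maps onto a kernel. It is a minimal, illustrative implementation; the names and sizes (downsample_kernel, BLOCK_SIZE, MAX_DF, MAX_TAPS, the __constant__ array c) are our assumptions, not taken from the paper's code, and the staging of the input row into shared memory, omitted from Algorithm 1 for brevity, is included here.

// Minimal sketch of Algorithm 1 (illustrative; names and sizes are assumptions).
#define BLOCK_SIZE 256
#define MAX_DF     16
#define MAX_TAPS   (10 * MAX_DF)

__constant__ float c[MAX_TAPS];              // filter coefficients

__global__ void downsample_kernel(const float *input, float *output,
                                  int in_width, int out_width, int T, int DF)
{
    // Shared-memory slice of the input row needed by all threads of this block.
    __shared__ float sh_input[BLOCK_SIZE * MAX_DF + MAX_TAPS];

    int row    = blockIdx.y;
    int out_x0 = blockIdx.x * BLOCK_SIZE;    // first output pixel of this block
    int in_x0  = out_x0 * DF;                // first input pixel it needs
    int needed = BLOCK_SIZE * DF + T;        // input samples needed by the block

    // Cooperative load from global to shared memory, zero-padded at the row end
    // (this is the part Algorithm 1 omits for brevity).
    for (int i = threadIdx.x; i < needed; i += BLOCK_SIZE) {
        int gx = in_x0 + i;
        sh_input[i] = (gx < in_width) ? input[row * in_width + gx] : 0.0f;
    }
    __syncthreads();

    // Dot product over the filtering range: strided shared-memory reads.
    int   index = threadIdx.x * DF;
    float sum   = 0.0f;
    for (int k = 0; k < T; ++k)
        sum += sh_input[k + index] * c[k];

    output[row * out_width + out_x0 + threadIdx.x] = sum;
}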

As discussed in Section II, the code in Algorithm 1 can utilize the full shared memory bandwidth only if the DF is relatively prime to the number of shared memory banks. Figure 2 shows the bank conflicts that result when a downsampling factor of 8 interacts with 32-banked shared memory, reducing the shared memory bandwidth by the common factor of 8.


Below we outline a typical implementation of this computation on a GPGPU-based system. We use CUDA terminology, but it applies equally to OpenCL.


A. Image downsampling in 1D: a typical implementation on GPU

In a typical implementation each thread computes one output pixel and hence performs the dot product (convolution sum) over the filtering range illustrated in Figure 1. It is convenient to use 1D blocks of size BLOCK_SIZE along the X dimension. Let OUTXSIZE and YSIZE be the width and the height of the output image; the grid size is then (OUTXSIZE/BLOCK_SIZE, YSIZE). As adjacent threads read overlapping regions of the input, we can load the part of the row required by all threads in a block into shared memory, followed by a syncthreads() operation. All threads in the block can then read from shared memory while computing the dot product, avoiding high-latency reads from device memory. Algorithm 1 shows an outline of such a downsampling kernel. Here local_thread_id is the thread id within a block and global_thread_id is a unique thread id calculated from the block id and grid id. For brevity, we have not shown the global memory reads of input data into the shared memory array (sh_input) or the syncthreads() operation.
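As a hedged illustration of the launch configuration described above, the host side could look like the sketch below (reusing the hypothetical downsample_kernel and BLOCK_SIZE names from the earlier sketch; d_input and d_output are assumed to be device pointers allocated elsewhere):

// Illustrative host-side launch: one thread per output pixel,
// 1D blocks along X and one grid row per image row.
void launch_downsample(const float *d_input, float *d_output,
                       int in_width, int out_width, int height,
                       int T, int DF)
{
    dim3 block(BLOCK_SIZE, 1);
    dim3 grid(out_width / BLOCK_SIZE, height);   // (OUTXSIZE/BLOCK_SIZE, YSIZE)
    downsample_kernel<<<grid, block>>>(d_input, d_output,
                                       in_width, out_width, T, DF);
}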

Figure 2. Shared memory accesses in a warp with NB=32 and DF=8 for Algorithm 1


II. THE SHARED MEMORY BANK CONFLICT PROBLEM AND ACCESS PATTERN SCRAMBLING

With the kernel in Algorithm 1, all threads in a warp execute the body of the loop in lockstep. At each iteration, each thread loads an input value from shared memory and a filter coefficient from (possibly) constant memory. Adjacent threads do not load consecutive values from shared memory; the shared memory is accessed in a strided manner with stride equal to the downsampling factor. The shared memory in GPUs is organized in banks to increase the effective bandwidth. However, the banks must be accessed in a coalesced manner, otherwise bank conflicts occur. The number of coalescing patterns supported by shared memory is very limited compared to the more flexible patterns allowed by global memory. Typically, if all threads access the same location in shared memory, the access is fast due to a broadcast. Assuming that the constant memory has an organization similar to shared memory, or that the constant memory is part of shared memory, the read of the filter coefficient results in a fast access because all threads read the same location at each instruction. For the sh_input array, however, the accesses lead to bank conflicts whenever the number of shared memory banks shares a common factor with the DF, and the bandwidth utilization is reduced by that common factor. In other words, the code in Algorithm 1 can utilize the full shared memory bandwidth only if the DF is relatively prime to the number of banks.
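The bandwidth argument can be made precise with a short calculation (a sketch, writing NB for the number of banks and assuming the usual word-interleaved bank mapping). At loop step k, thread t of a warp reads sh_input[t · DF + k], which lands in bank

\[
\mathrm{bank}(t,k) = \bigl(t \cdot DF + k\bigr) \bmod NB ,
\]

so at a fixed step the warp touches only NB / gcd(DF, NB) distinct banks, and the effective shared memory bandwidth drops by the factor gcd(DF, NB). For DF = 8 and NB = 32 this gives gcd(8, 32) = 8, i.e., an 8-way bank conflict, as in Figure 2.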

Figure 3. Shared memory accesses in a warp with NB=32 and DF=8 for Algorithm 2: thread i starts summing at index i of the dot product, giving an effective stride of 9.

Algorithm 2 SCRAMBLING MODULO KERNEL
{T is number of filter taps}
{NB is number of shared memory banks}
{DF is downsampling factor}
index ← local_thread_id × DF
sum ← 0
for k = 0 to T − 1 do
  strided_k ← (k + local_thread_id) % T
  sum ← sum + sh_input[strided_k + index] × c[strided_k]
end for
output[global_thread_id] ← sum
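A minimal CUDA sketch of the Modulo (%) Scrambling loop, written as a device function under the same assumptions as the earlier kernel sketch (sh_input points to the staged shared-memory row for this block and c is the hypothetical __constant__ coefficient array; the function name is illustrative):

// Modulo (%) Scrambling (Algorithm 2): each thread starts its dot product
// at its own offset, so adjacent threads land in different banks even when
// DF shares a factor with the number of banks.
__device__ float dot_modulo_scrambled(const float *sh_input, int T, int DF)
{
    int   index = threadIdx.x * DF;
    float sum   = 0.0f;
    for (int k = 0; k < T; ++k) {
        int strided_k = (k + threadIdx.x) % T;   // per-thread starting offset
        sum += sh_input[strided_k + index] * c[strided_k];
    }
    return sum;
}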

A. Modulo (%) Scrambling

We can solve this problem by implementing the dot product in a strided manner across threads, as shown in Algorithm 2. The dot product is a weighted sum, and the order in which the sum over the array elements is performed does not change the result (summation is commutative and associative).

We can exploit this property by starting the dot product loop in different threads at different array indices. For example, if the number of shared memory banks is 8 and the DF is also 8, then all accesses to sh_input result in conflicts on a single bank. But if the i'th thread starts computing the dot product at the i'th element, accesses to shared memory become perfectly coalesced and there are no bank conflicts. Another way to look at this idea is that it works as if we had made the DF 9, by adding the thread id to the array element index. As 9 is relatively prime to 8, there are no bank conflicts, as seen in Figure 3. Any odd number is relatively prime to a power of two, and every even number shares a factor with a power of two; therefore, all even DFs result in bank conflicts in Algorithm 1. Algorithm 2 works as if DF had been incremented by 1, making it odd. Thus Algorithm 2 works without any bank conflicts when the number of banks is a power of 2 and the DF is even. In practice, all the GPUs we consider have a number of banks (NB) that is a power of two, so this solution suffices.

1) Avoiding the modulo operator (%) within the loop: On some GPUs, where a special function unit is not available, the modulo (%) operator can be time consuming. In Algorithm 2, the dot product loop contains a potentially costly % T operation. We can move the % operation outside the loop as shown in Algorithm 3. However, by doing so we have introduced an IF condition inside the convolution loop, and some GPU architectures may not have efficient predicated execution. Algorithm 4 shows an equivalent implementation that avoids the IF condition in the loop.

Algorithm 3 SCRAMBLING AVOID MODULO KERNEL
{Avoid % operator within loop}
{T is number of filter taps}
index ← local_thread_id × DF
pre_offset ← local_thread_id % T
sum ← 0
for k = 0 to T − 1 do
  strided_k ← k + pre_offset
  if strided_k ≥ T then
    strided_k ← strided_k − T
  end if
  sum ← sum + sh_input[strided_k + index] × c[strided_k]
end for
output[global_thread_id] ← sum

B. Permutation Scrambling: efficient implementation with constant memory for filter coefficients

As demonstrated above, the shared memory accesses in Algorithm 1 can lead to bank conflicts, but the accesses to the coefficient array c do not. In typical hardware implementations of GPU constant memory, access is fastest when all threads access an identical location, because the value can be broadcast to all threads. However, in avoiding the shared memory bank conflicts in Algorithm 2, we have changed the access pattern of the coefficients. This access pattern is still coalesced, as adjacent threads access adjacent coefficients. However, for some GPUs such as NVIDIA's [1], constant memory accesses are faster when all threads access the same location.

Algorithm 4 SCRAMBLING AVOID MODULO IF KERNEL
{Avoid IF statement in the loop}
{T is number of filter taps}
index ← local_thread_id × DF
pre_offset ← local_thread_id % T
sum ← 0
for k = 0 to T − pre_offset − 1 do
  strided_k ← k + pre_offset
  sum ← sum + sh_input[strided_k + index] × c[strided_k]
end for
for k = T − pre_offset to T − 1 do
  strided_k ← k + pre_offset − T
  sum ← sum + sh_input[strided_k + index] × c[strided_k]
end for
output[global_thread_id] ← sum
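A CUDA sketch of Algorithm 4, again as a device function under the same illustrative assumptions as the previous sketches (sh_input staged per block, c the hypothetical constant-memory coefficient array):

// Algorithm 4: the single % is hoisted out of the loop and the wrap-around
// is handled by splitting the convolution into two loops, so the loop body
// contains neither a modulo nor a branch.
__device__ float dot_modulo_scrambled_two_loops(const float *sh_input, int T, int DF)
{
    int   index      = threadIdx.x * DF;
    int   pre_offset = threadIdx.x % T;      // one % per thread, outside the loop
    float sum        = 0.0f;

    for (int k = 0; k < T - pre_offset; ++k) {    // covers indices pre_offset .. T-1
        int strided_k = k + pre_offset;
        sum += sh_input[strided_k + index] * c[strided_k];
    }
    for (int k = T - pre_offset; k < T; ++k) {    // covers wrapped indices 0 .. pre_offset-1
        int strided_k = k + pre_offset - T;
        sum += sh_input[strided_k + index] * c[strided_k];
    }
    return sum;
}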

Thread group      Original order     Permutation scrambling order
Threads 0 .. 7    0 1 2 3 4 5 6 7    0 1 2 3 4 5 6 7
Threads 8 .. 15   0 1 2 3 4 5 6 7    2 3 0 1 6 7 4 5
Threads 16 .. 23  0 1 2 3 4 5 6 7    4 5 6 7 0 1 2 3
Threads 24 .. 31  0 1 2 3 4 5 6 7    6 7 4 5 2 3 0 1

Figure 4. An example of shared memory bank locations accessed within a warp as execution progresses (time steps 0 to 7), for the original order and for Permutation Scrambling.

On such GPUs, this change in the constant memory access pattern may nullify the advantage gained by avoiding shared memory bank conflicts. In Algorithm 5, we show a specific solution to this problem that also avoids the % operator. This solution targets NVIDIA architectures, where a group of 32 threads (a warp) executes as one instruction stream. The algorithm works as follows. First, the filtering loop is manually unrolled 8 times and the coefficients are loaded sequentially, so all threads access the same address in constant memory. Then, while loading the input from shared memory, we apply strided access patterns. For this, the 32 threads are divided into 4 groups using their thread id. Each group has its own pattern for loading input elements, achieved via an index calculation using an XOR operation with the group id. Finally, while calculating the sum, we select the partial sum corresponding to the group id. As the threads in a warp are divided among 4 groups and each group accesses shared memory with its own scrambled access pattern (Figure 4), we improve the shared memory bandwidth by a factor of 4.

III. EXPERIMENTAL RESULTS

We evaluated the performance of three different downsampling tasks. (i) Image downsampling in the X direction with fixed output XSIZE. In this case we vary the input XSIZE in steps of 16 and automatically determine the required rational downsampling factor (DF) to maintain the output XSIZE at ∼320. Here the number of downsampling filter taps is T = 4 × DF + 1.

(ii) A fixed-size image (4096×256) where each sample is a float4 vector; a float4 vector can represent the components of a velocity field (magnitude and direction in 3D). We downsample this image in the X direction while varying the downsampling factor from 2 to 16. Here, T = 4 × DF + 1. (iii) Fixed-size audio (4M samples) where each sample is a float. We downsample this audio signal while varying the downsampling factor from 2 to 16. Here, T = 10 × DF. We measure kernel performance in mega samples per second with respect to the number of samples in the input image/audio. Each result is an average over 200 kernel executions.

A. Modulo (%) Scrambling

We present results of Modulo (%) Scrambling on the Intel HD4000 integrated GPU and the AMD FirePro W5100 GPU. For Modulo (%) Scrambling, we have three increasingly efficient versions, viz. Algorithm 2, Algorithm 3 and Algorithm 4. Algorithm 4 avoids both the % operator and the IF condition in the convolution loop and results in the maximum performance improvements. We implemented our GPU programs using OpenCL for the Intel and AMD platforms.

Figure 5. Image downsampling by rational factors on Intel HD4000 integrated GPU, Output XSIZE ∼320 (original vs. % Scrambling, Algorithm 4).


Figure 6. Image downsampling by rational factors on AMD W5100 GPU, Output XSIZE ∼320

1) Intel HD4000 integrated GPU: The Intel HD4000 integrated GPU is part of the Intel Core i7 3770 CPU (Ivy Bridge). The HD4000 provides a maximum of 25.6 GB/s device memory bandwidth and 294 single precision GFlop/s. It has 16 execution units, each supporting 8 hardware threads. Figure 5 shows the performance of task (i) without scrambling and with scrambling using Algorithm 4. For the original version we see three distinct regions of performance drop (for input XSIZE near 1200, 1800 and 2400) corresponding to DFs of 1/4, 1/6 and 1/8 respectively. As expected, the performance drops most for DF = 1/8, followed by 1/4 and 1/6, as these share the common factors 8, 4 and 2 respectively with the number of banks, which is a power of two. With scrambling, we effectively bridge those performance gaps.

2) AMD FirePro W5100 GPU: The AMD W5100 is a workstation graphics card with 768 stream processors (hardware threads) in 12 compute units, providing 96 GB/s of device memory bandwidth and 1.43 TFlop/s of compute capacity. Similarly to the HD4000 GPU, in Figure 6 we see three regions of performance improvement corresponding to downsampling factors 1/4, 1/6 and 1/8.


Algorithm 5 PERMUTATION SCRAMBLING KERNEL
{T is number of filter taps}
{& is bitwise AND, ≫ is bitwise right shift, ⊕ is bitwise XOR and = is a comparison operator yielding 1 for true and 0 for false}
iweight ← (local_thread_id & 18H) ≫ 2
fweight0 ← 1.0 × (iweight = 0)
fweight1 ← 1.0 × (iweight = 2)
fweight2 ← 1.0 × (iweight = 4)
fweight3 ← 1.0 × (iweight = 6)
sum ← 0
for k = 0 to T − 1 in steps of 8 do
  C0 ← c[k];  C1 ← c[k + 1];  C2 ← c[k + 2];  C3 ← c[k + 3]
  C4 ← c[k + 4];  C5 ← c[k + 5];  C6 ← c[k + 6];  C7 ← c[k + 7]
  index ← local_thread_id × DF
  n ← k + index
  X0 ← sh_input[n + (0 ⊕ iweight)];  X1 ← sh_input[n + (1 ⊕ iweight)]
  X2 ← sh_input[n + (2 ⊕ iweight)];  X3 ← sh_input[n + (3 ⊕ iweight)]
  X4 ← sh_input[n + (4 ⊕ iweight)];  X5 ← sh_input[n + (5 ⊕ iweight)]
  X6 ← sh_input[n + (6 ⊕ iweight)];  X7 ← sh_input[n + (7 ⊕ iweight)]
  sum ← sum +
    (C0 × X0 + C1 × X1 + C2 × X2 + C3 × X3 + C4 × X4 + C5 × X5 + C6 × X6 + C7 × X7) × fweight0 +
    (C0 × X2 + C1 × X3 + C2 × X0 + C3 × X1 + C4 × X6 + C5 × X7 + C6 × X4 + C7 × X5) × fweight1 +
    (C0 × X4 + C1 × X5 + C2 × X6 + C3 × X7 + C4 × X0 + C5 × X1 + C6 × X2 + C7 × X3) × fweight2 +
    (C0 × X6 + C1 × X7 + C2 × X4 + C3 × X5 + C4 × X2 + C5 × X3 + C6 × X0 + C7 × X1) × fweight3
end for
output[global_thread_id] ← sum
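A CUDA sketch of Algorithm 5, written as a device function under the same assumptions as the earlier sketches (c is the hypothetical __constant__ coefficient array, sh_input the staged shared-memory row, and T is assumed to be a multiple of 8):

// Permutation Scrambling (Algorithm 5): all 32 threads of a warp read c[k+j]
// at the same constant-memory address (broadcast), while the shared-memory
// indices are XOR-permuted per group of 8 threads to spread accesses across banks.
__device__ float dot_permutation_scrambled(const float *sh_input, int T, int DF)
{
    int   iweight  = (threadIdx.x & 0x18) >> 2;      // 0, 2, 4 or 6 per 8-thread group
    float fweight0 = (iweight == 0) ? 1.0f : 0.0f;
    float fweight1 = (iweight == 2) ? 1.0f : 0.0f;
    float fweight2 = (iweight == 4) ? 1.0f : 0.0f;
    float fweight3 = (iweight == 6) ? 1.0f : 0.0f;

    int   index = threadIdx.x * DF;
    float sum   = 0.0f;

    for (int k = 0; k < T; k += 8) {
        float C0 = c[k+0], C1 = c[k+1], C2 = c[k+2], C3 = c[k+3];
        float C4 = c[k+4], C5 = c[k+5], C6 = c[k+6], C7 = c[k+7];

        int n = k + index;
        float X0 = sh_input[n + (0 ^ iweight)], X1 = sh_input[n + (1 ^ iweight)];
        float X2 = sh_input[n + (2 ^ iweight)], X3 = sh_input[n + (3 ^ iweight)];
        float X4 = sh_input[n + (4 ^ iweight)], X5 = sh_input[n + (5 ^ iweight)];
        float X6 = sh_input[n + (6 ^ iweight)], X7 = sh_input[n + (7 ^ iweight)];

        // Four candidate partial sums, one per permutation; the fweight factors
        // select the pairing that matches this thread's group, so every thread
        // still computes the correct dot product.
        sum += (C0*X0 + C1*X1 + C2*X2 + C3*X3 + C4*X4 + C5*X5 + C6*X6 + C7*X7) * fweight0
             + (C0*X2 + C1*X3 + C2*X0 + C3*X1 + C4*X6 + C5*X7 + C6*X4 + C7*X5) * fweight1
             + (C0*X4 + C1*X5 + C2*X6 + C3*X7 + C4*X0 + C5*X1 + C6*X2 + C7*X3) * fweight2
             + (C0*X6 + C1*X7 + C2*X4 + C3*X5 + C4*X2 + C5*X3 + C6*X0 + C7*X1) * fweight3;
    }
    return sum;
}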


Figure 7. Performance of Permutation Scrambling on NVIDIA Quadro 4000 (Fermi) on 4096 x 256 image of float4 samples

B. Permutation Scrambling

Modulo (%) Scrambling results in slowdowns on NVIDIA GPUs due to divergent constant memory accesses and the costly % operator; for brevity, we omit those results. For NVIDIA GPUs we report results using the Permutation Scrambling technique on tasks (ii) and (iii).


Figure 8. Performance of Permutation Scrambling on NVIDIA Quadro 4000 (Fermi) on 4194304 float audio samples




Figure 10. Performance of Permutation Scrambling on NVIDIA Quadro K2000 (Kepler) on 4194304 float audio samples

1) NVIDIA Quadro 4000 (Fermi) GPU: The NVIDIA Quadro 4000 is a workstation GPU of the Fermi architecture delivering 89.6 GB/s device memory bandwidth, with 256 CUDA cores (hardware threads) and 486 GFlop/s of single precision performance. Figures 7 and 8 show the performance of Permutation Scrambling on tasks (ii) and (iii) respectively. Table I summarizes the speedups for the various downsampling factors. For the image dataset (task (ii)), the Fermi GPU performance improves for all even downsampling factors (with speedups ranging from 1.18x to 2.51x). For the audio dataset, performance improves only for DFs that share higher factors with the number of banks (which is 32). According to our earlier analysis, the bandwidth loss factor due to bank conflicts equals the common factor between the DF and the number of banks, so we expect larger performance improvements for such DFs. This pattern is common across all three architectures and both tasks, as seen in Table I. For Fermi, for DFs that share only a factor of 2 with the number of banks, the computational overhead of Permutation Scrambling exceeds the gain from bank conflict free operation, and we see some slowdowns for DF = 2, 6, 10.

2) NVIDIA Quadro K2000 (Kepler) GPU: The NVIDIA Quadro K2000 is a workstation GPU of the Kepler architecture with 384 CUDA cores and a maximum memory bandwidth of 96 GB/s. Figures 9 and 10 show the performance of Permutation Scrambling on tasks (ii) and (iii) respectively. Permutation Scrambling leads to performance improvements for all even downsampling factors (with speedups ranging from 1.10x to 2.66x). For task (ii), as the sample size is float4, performance decreases with increasing DF as the kernel becomes compute bound. In that case the performance decrease for even downsampling factors is also not as pronounced as for task (iii), where each sample is a float.


Figure 9. Performance of Permutation Scrambling on NVIDIA Quadro K2000 (Kepler) on 4096 x 256 image of float4 samples

3) NVIDIA Quadro K2200 (Maxwell) GPU: The NVIDIA Quadro K2200 is a workstation GPU of the Maxwell architecture with 640 CUDA cores and a maximum memory bandwidth of 80 GB/s. Figures 11 and 12 show the performance of Permutation Scrambling on tasks (ii) and (iii) respectively. Permutation Scrambling leads to performance improvements for all even downsampling factors except DF=14 for task (ii). The speedups range from 1.01x to 4.15x. As on Kepler, we observe that the task (ii) performance decreases as DF increases, though less markedly than on Kepler. However, for Maxwell the task (ii) speedups are lower than on both Fermi and Kepler. On both Fermi and Maxwell we see speedups peak at DF = 8 instead of 16. This may be because, when adjacent threads read in units of float4, it works as if the downsampling factor had been multiplied by 4, since the accesses spread 4 times wider.

         4096 × 256 image, float4                      4194304 × 1 audio, float
DF   Quadro 4000  Quadro K2000  Quadro K2200    Quadro 4000  Quadro K2000  Quadro K2200
     (Fermi)      (Kepler)      (Maxwell)       (Fermi)      (Kepler)      (Maxwell)
 2       1.30         1.10          1.02            0.84         1.35          1.11
 4       2.15         1.40          1.54            1.07         1.79          1.53
 6       1.32         1.28          1.08            0.90         1.48          1.19
 8       2.51         1.68          1.65            1.77         2.25          2.73
10       1.21         1.22          1.01            0.92         1.30          1.27
12       1.53         2.10          1.22            1.22         1.89          1.68
14       1.18         1.30          0.97            1.01         1.27          1.18
16       2.00         2.63          1.25            2.95         2.66          4.15

Table I. Speedup factors vs. DF on NVIDIA GPUs with Permutation Scrambling

IV. CONCLUSION

We explore the shared memory bank conflict problem arising in typically optimized rate conversion filtering implementations on GPUs. We find that if the number of banks shares a factor with the downsampling factor, there will be bank conflicts; this is the case for all even downsampling factors when the bank count is a power of two. To avoid this, we compute the convolution sum for different threads in a warp in scrambled orders, thereby scrambling the shared memory access patterns and utilizing the full memory bandwidth. Our techniques demonstrate that exploiting the computational structure of the problem can yield significant performance gains on a broad range of GPU architectures.

REFERENCES


[1] CUDA C Programming Guide. NVIDIA Corporation, 2012.

[2] N. Wilt, The CUDA Handbook: A Comprehensive Guide to GPU Programming. Pearson Education, 2013.

[3] J. E. Stone, D. Gohara, and G. Shi, "OpenCL: A parallel programming standard for heterogeneous computing systems," IEEE Des. Test, vol. 12, no. 3, pp. 66–73, May 2010.

[4] J. Proakis and D. Manolakis, Digital Signal Processing. Pearson Prentice Hall, 2007.




[5] NVIDIA’s next generation CUDA compute architecture: Fermi. NVIDIA Corporation, 2009.


[6] NVIDIA’s next generation CUDA compute architecture: Kepler GK110/210. NVIDIA Corporation, 2014.


[7] NVIDIA GeForce GTX 980 Featuring Maxwell, The Most Advanced GPU Ever Made. NVIDIA Corporation, 2014.




[8] P. Budnik and D. J. Kuck, “The organization and use of parallel memories,” IEEE Transactions on Computers, vol. C-20, no. 12, pp. 1566–1569, Dec 1971.

Figure 11. Performance of Permutation Scrambling on NVIDIA Quadro K2200 (Maxwell) on 4096 x 256 image of float4 samples

[9] D. H. Lawrie and C. R. Vora, “The prime memory system for array access,” IEEE Trans. Comput., vol. 31, no. 5, pp. 435–442, May 1982.


[10] S. Weiss, "An aperiodic storage scheme to reduce memory conflicts in vector processors," in Proceedings of the 16th Annual International Symposium on Computer Architecture, May 1989, pp. 380–386.


[11] D. T. Harper and D. A. Linebarger, “Conflict-free vector access using a dynamic storage scheme,” IEEE Transactions on Computers, vol. 40, no. 3, pp. 276–283, Mar 1991.


[12] S. Gao, "Improving GPU shared memory access efficiency," Ph.D. dissertation, University of Tennessee, 2014.


Figure 12. Performance of Permutation Scrambling on NVIDIA Quadro K2200 (Maxwell) on 4194304 float audio samples

[13] C. Gou and G. N. Gaydadjiev, "Elastic pipeline: Addressing GPU on-chip shared memory bank conflicts," in Proceedings of the 8th ACM International Conference on Computing Frontiers, ser. CF '11, 2011.

[14] S. C. Kim, W. L. Plishker, and S. S. Bhattacharyya, "An efficient GPU implementation of an arbitrary resampling polyphase channelizer," in Conference on Design and Architectures for Signal and Image Processing (DASIP), 2013.
