Hi, my name is Rodolfo Schulz de Lima.
This site is under heavy maintenance; it will eventually be WordPress-based, but I currently don't have time to work on it. Here's a short bio:
My curriculum vitae can be found here.
Rodolfo Schulz de Lima is a software analyst and developer with 22 years of experience in C/C++ programming (15 years professionally). He has been involved in several projects, covering a broad range of applications, from vehicle surveillance systems to 3D panorama processing and visualization, with several database applications in between.
He graduated from the Federal University of Rio de Janeiro (UFRJ) with a degree in Electronics Engineering. He works at Digitok as a software analyst and lead programmer, and is a collaborator at the Visgraf laboratory at IMPA. His areas of interest are computer graphics, operating systems, compilers, and object-oriented and generic programming.
Lima, R. S.;
ACM Transactions on Graphics
(Proceedings of the ACM SIGGRAPH Asia 2011), Hong Kong, December 2011, 30(6)
Abstract: Image processing operations like blurring, inverse convolution, and summed-area tables are often computed efficiently as a sequence of 1D recursive filters. While much research has explored parallel recursive filtering, prior techniques do not consider optimization across the entire filter sequence. Typically, a separate filter (or often a causal-anticausal filter pair) is required in each dimension. Computing these filter passes independently results in significant traffic to global memory, a crucial bottleneck in GPU systems. We present a new algorithmic framework for parallel evaluation. It partitions the image into 2D blocks, with a small band of data buffered along each block perimeter. We show that these perimeter bands are sufficient to accumulate the effects of the successive filters. A remarkable result is that the image data is read only twice and written just once, independent of image size, and thus total memory bandwidth is reduced even compared to the traditional serial algorithm. We demonstrate significant speedups in GPU computation.
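As background for the abstract above, here is a minimal serial sketch of the building blocks it mentions: a causal first-order recursive filter, and a summed-area table computed as two passes of that filter (one per dimension). This is only an illustration of the operations being parallelized, not the paper's blocked GPU algorithm; the function names and the first-order choice are mine.

```cpp
#include <vector>
#include <cstddef>

// Causal first-order recursive filter: y[i] = x[i] + a * y[i-1].
// Each output depends on the previous output, which is what makes
// the computation inherently sequential along the filtered dimension.
std::vector<double> recursive_filter(const std::vector<double>& x, double a) {
    std::vector<double> y(x.size());
    double prev = 0.0;  // boundary condition: y[-1] = 0
    for (std::size_t i = 0; i < x.size(); ++i) {
        prev = x[i] + a * prev;  // feedback on the previous output
        y[i] = prev;
    }
    return y;
}

// Summed-area table of an h-by-w image stored row-major: the a = 1
// special case (a prefix sum) applied along rows, then along columns.
std::vector<double> summed_area_table(std::vector<double> img, int h, int w) {
    for (int r = 0; r < h; ++r) {        // row-wise prefix sums
        double prev = 0.0;
        for (int c = 0; c < w; ++c) {
            prev += img[r * w + c];
            img[r * w + c] = prev;
        }
    }
    for (int c = 0; c < w; ++c) {        // column-wise prefix sums
        double prev = 0.0;
        for (int r = 0; r < h; ++r) {
            prev += img[r * w + c];
            img[r * w + c] = prev;
        }
    }
    return img;
}
```

Computed this way, each 1D pass reads and writes the whole image; the paper's contribution is restructuring the passes over 2D blocks so that, regardless of how many filters are chained, the image data crosses global memory only twice in and once out.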