Yes, sure. For my test data I used the New Tsukuba Stereo Dataset (it provides 1800 stereo image pairs, depth maps, etc.):
I used OrganisedConversion<>::convert() to create point clouds out of the depth and colour images, and then voxel-grid downsampled them.
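For reference, the depth-to-cloud step boils down to back-projecting each pixel with the pinhole camera model. Here is a minimal, PCL-free sketch of that maths; the intrinsics `fx, fy, cx, cy` and the `depthToCloud` helper are hypothetical names, not the actual OrganisedConversion<>::convert() API:

```cpp
#include <cstddef>
#include <vector>

// A bare XYZ point; the real pipeline also carries RGB from the colour image.
struct Point { float x, y, z; };

// Minimal sketch of back-projecting a depth image into an organised point
// cloud with a pinhole model. fx, fy, cx, cy are hypothetical intrinsics.
std::vector<Point> depthToCloud(const std::vector<float>& depth,
                                int width, int height,
                                float fx, float fy, float cx, float cy)
{
    std::vector<Point> cloud(static_cast<std::size_t>(width) * height);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const float z = depth[static_cast<std::size_t>(v) * width + u];
            // X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy
            cloud[static_cast<std::size_t>(v) * width + u] =
                { (u - cx) * z / fx, (v - cy) * z / fy, z };
        }
    }
    return cloud;
}
```

The cloud stays "organised" (one point per pixel, in row-major order), which is what lets later filters exploit the image structure.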
Each point cloud is 640 x 480 = 307,200 points.
I was able to convert the depth+colour images to point clouds -> apply a pass-through filter -> voxel downsample the cloud (leaf size 0.03f, 0.03f, 0.03f) -> render it all to screen at an average of 25 fps, whereas before optimising the code I could only get 0.4 fps.
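In case it helps to see what the downsampling step is doing: voxel-grid filtering buckets points into leaf-sized cubes and keeps one centroid per occupied cube. A minimal standalone sketch of that idea (not PCL's actual pcl::VoxelGrid implementation; `voxelDownsample` is a hypothetical helper):

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Point { float x, y, z; };

// Minimal sketch of voxel-grid downsampling: bucket points by their voxel
// index and replace each bucket with its centroid. With leaf = 0.03f this
// mirrors what a leaf size of (0.03f, 0.03f, 0.03f) does in the pipeline.
std::vector<Point> voxelDownsample(const std::vector<Point>& in, float leaf)
{
    struct Acc { double x = 0, y = 0, z = 0; std::size_t n = 0; };
    std::unordered_map<std::uint64_t, Acc> grid;
    for (const Point& p : in) {
        // Pack the 3D voxel index into one 64-bit key (21 bits per axis).
        auto idx = [leaf](float c) {
            return static_cast<std::uint64_t>(
                static_cast<std::int64_t>(std::floor(c / leaf)) & 0x1FFFFF);
        };
        const std::uint64_t key =
            idx(p.x) | (idx(p.y) << 21) | (idx(p.z) << 42);
        Acc& a = grid[key];
        a.x += p.x; a.y += p.y; a.z += p.z; ++a.n;
    }
    std::vector<Point> out;
    out.reserve(grid.size());
    for (const auto& kv : grid) {
        const Acc& a = kv.second;
        out.push_back({ static_cast<float>(a.x / a.n),
                        static_cast<float>(a.y / a.n),
                        static_cast<float>(a.z / a.n) });
    }
    return out;
}
```

The single hash-map pass is O(n), which is a big part of why downsampling 307,200 points per frame is cheap enough for real-time rates.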
I'll try to get the rest of the code for the other modules uploaded here first so you can double-check it before I contribute to the repository, as I'm not good with GitHub and such.