Texture mapping

Texture mapping

raph
Hello all,

I've been writing some functions for texture mapping on meshes, and I'm looking for feedback and advice.

The main goal here is to texture a mesh created by kinfu.
For that, we store some tuples (Kinect image, Kinect pose) while kinfu runs.

Once the mesh is extracted as a ply file, we read it and store it into a TextureMesh object.

Each camera is described by its focal length (in pixels), its pose (relative to the world), and the size of its image (in pixels).
For each camera, we identify non-occluded and visible faces.
If we used N cameras, after this process the TextureMesh is divided into N+1 sub-meshes (the last one corresponds to faces that are not visible from any camera).
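
For illustration, a minimal sketch of such a per-camera description (the field names here are illustrative, not an existing PCL API):

#include <string>
#include <Eigen/Geometry>

// Hypothetical per-camera description used to drive the projection.
struct Camera
{
  Eigen::Affine3f pose;       // camera pose relative to the world
  float focal_length;         // focal length in pixels
  int width, height;          // image size in pixels
  std::string texture_file;   // RGB image captured at this pose
};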

For each sub-mesh, we perform UV-mapping based on the camera's characteristics.
The TextureMesh is then written as a .obj file and rendered in Meshlab.

I plan to add these functions into texture_mapping.hpp in surface. Would that be a good idea?

I am also facing several challenges:

1- Finding occlusions efficiently
As I want the occlusion-detection functions to be generic, I base my method on point clouds rather than meshes. Therefore, I use an octree to raytrace from each point of the cloud toward the origin.
I find it hard to detect occlusions accurately on planes that are almost parallel to the viewing direction: many times, I get occlusions caused by points from the same surface.

This is illustrated in the image below, where the yellow bands (corresponding to 1 occluding point) should be red (0 occlusions):



2- Projecting Kinect texture onto a point cloud acquired with the Kinect
I have a point cloud and a texture acquired by the same Kinect.
When I try to apply naive UV-mapping (focal length is 525 pixels, image is 640x480), I always end up with a shift similar to the one seen in http://www.pcl-users.org/Color-problem-with-kinfu-td3588110.html

As I cannot use the OpenNI framework, I guess I need to find the proper transformation between the RGB and IR sensors of the Kinect.
Would anyone know where I can find this info?
Or where to find what physical point is used as the reference for positioning the Kinect in Kinfu's world?
Also, any advice on the focal length to use with a Kinect?


Here are a few screenshots of what my code does so far:

different cameras


faces per camera (3 = no camera)


texture mesh in meshlab


Raph

Re: Texture mapping

Radu B. Rusu
Raphael,


On 03/08/2012 08:40 AM, raph wrote:

> I plan to add these functions into texture_mapping.hpp in surface. Would
> that be a good idea?

I think so. We don't have a maintainer for this stuff yet, so we're still in hacking mode until someone steps up to own
that component ;)

> 2- Projecting Kinect texture onto a point cloud acquired with the Kinect
> [...]
> Would anyone know where I can find this info?
> Also, any advice on the focal length to use with a Kinect?

There is a fixed focal length that Suat added to OpenNIGrabber. AFAIK Patrick calibrated the Kinect and obtained similar
values.


Cheers,
Radu.
--
http://pointclouds.org


Re: Texture mapping

raph
Radu,

Thanks for the leads, I will post about my progress here.
It's going to take me a bit of time before my code is 'good enough to commit', but I don't mind maintaining the texture_mapping part in the future.

Raph

Re: Texture mapping

Geoffrey Biggs
Hi Raph,

On Mar 14, 2012, at 1:10 AM, raph wrote:

> It's going to take me a bit of time before my code is 'good enough to
> commit', but I don't mind maintaining the texture_mapping part in the future.

It's great to see you interested in developing for PCL!

Remember that the bar for "good enough to commit" is not that high. Your code does not need to be perfect, because it won't be released straight away. The sooner you commit *working* code (even if it has bugs, if it accomplishes its task, it's working), the sooner we can all help you improve it. We're looking forward to having your work in PCL!

Geoff

Re: Texture mapping

raph
Hi Geoffrey,

thanks for the encouragement, I will post a minimal version soon.

For now, my biggest challenge is to perform a correct UV-mapping on the mesh generated by kinfu, using RGB images captured during the scanning process.

The vast majority of the code I have seen that maps points and colors from a Kinect does the opposite of what I need to do:
it iterates over the depth image, generates z from the depth value, then deduces x and y from a fixed focal_length and the uv coordinates of the current pixel (as in pcl::OpenNIGrabber::convertToXYZRGBPointCloud() in openni_grabber.cpp).
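
For illustration, here is a minimal sketch of both directions under the usual pinhole model, assuming the 525 px focal length and a 640x480 image with the principal point at its center (these intrinsics are assumptions, not calibrated values):

#include <pcl/point_types.h>

// Assumed intrinsics, for illustration only.
const float focal = 525.0f, cx = 319.5f, cy = 239.5f;

// What convertToXYZRGBPointCloud() does: pixel (u, v) + depth -> 3D point.
pcl::PointXYZ
backProject (float u, float v, float depth)
{
  pcl::PointXYZ p;
  p.z = depth;
  p.x = (u - cx) * depth / focal;
  p.y = (v - cy) * depth / focal;
  return p;
}

// The inverse, which is what I need: 3D point (in the camera frame) -> UV.
bool
projectToUV (const pcl::PointXYZ &p, float &u, float &v)
{
  if (p.z <= 0.0f)
    return false;                                  // behind the camera
  u = (focal * p.x / p.z + cx) / 640.0f;           // normalize to [0, 1]
  v = 1.0f - (focal * p.y / p.z + cy) / 480.0f;    // flip v: rows grow downward
  return (u >= 0.0f && u <= 1.0f && v >= 0.0f && v <= 1.0f);
}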


These are my current questions; they are highly related to texture mapping in the context of Kinfu:

- How precise is the camera pose in Kinfu? Is there a way to assess the quality of the pose-tracking at a given frame?

- Is the RGB picture already corrected, or will I need to go through a full calibration process for the Kinect?

- If I understood this post correctly, using kinfu with the -r option forces the Kinect to generate XYZ points with the RGB camera as the reference point, instead of the default reference at the IR camera.
But is that also the case for the camera pose relative to the mesh/world?

Thanks in advance
Raph

Re: Texture mapping

raph
Hello,

better late than never, I have just pushed my code to the repo.

This is my first commit, so feel free to rant on my coding style (I tried to follow the PCL guide) and give feedback.

I am currently cleaning a small code example that shows how to use it.

Here is what it can do with two (very close) cameras:
http://www.pcl-developers.org/file/n5580117/good00.png



As you can see, the occlusion detection could be enhanced on the TV screen and the sides of the White PC.

Also, occlusion detection is very slow. I hope to come up with a new faster way of doing it. For now I just built a naive raytracing model. Feel free to modify it.


Raph



Re: Texture mapping

Radu B. Rusu
Raphael,


On 03/20/2012 07:00 AM, raph wrote:
> Hello,
>
> better late than never, I have just pushed my code to the repo.

Awesome!

> This is my first commit, so feel free to rant on my coding style (I tried to
> follow the PCL guide) and give feedback.

I sent you a private e-mail regarding suggestions for improvement :)

>
> I am currently cleaning a small code example that shows how to use it.
>
> Here is what it can do with two (very close) cameras:
> http://www.pcl-developers.org/file/n5580117/good00.png 
>
>
> As you can see, the occlusion detection could be enhanced on the TV screen
> and the sides of the White PC.

Looks good! Nice job. If you get a code example/app that we can run on our machines, it'll be easier to give you feedback
on potential improvements.

>
> Also, occlusion detection is very slow. I hope to come up with a new faster
> way of doing it. For now I just built a naive raytracing model. Feel free to
> modify it.


Don't we have a fast raytracer in the octree code?

Cheers,
Radu.
--
http://pointclouds.org

Re: Texture mapping

raph
Hi Radu,

about this fast raytracer in the octree:

I currently perform occlusion detection as follows (and it really takes too long):

for each camera:
    transform the mesh into the camera's frame
    add the mesh points into an OctreePointCloudSearch octree
    for each transformed point:
        raytrace to (0, 0, 0) using the octree and count occlusions
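
In code, the inner step looks roughly like this (a sketch, not my committed code; note that it also counts the voxel containing the point itself):

#include <vector>
#include <pcl/point_types.h>
#include <pcl/octree/octree_search.h>

// Rough sketch: count occupied voxels between a point (already transformed
// into the camera's frame) and the camera at the origin.
int
countOcclusions (pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> &octree,
                 const pcl::PointXYZ &pt)
{
  Eigen::Vector3f origin (pt.x, pt.y, pt.z);   // start at the point
  Eigen::Vector3f direction = -origin;         // ray towards (0, 0, 0)
  std::vector<int> indices;
  octree.getIntersectedVoxelIndices (origin, direction, indices);
  return static_cast<int> (indices.size ());   // crude occlusion count
}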

Do you know something faster in that context?

Raph

Re: Texture mapping

raph
I've been thinking about this problem; here are some potential improvements I am considering:

A- Use VTK primitives for clipping.
In the end, I am trying to get occlusions for a point of view. I have the feeling that VTK or any other 3D renderer does that all the time; maybe I can get access to such functionality.
Any idea where I can find something like this in the VTK library?

B- Modify the algorithm to behave like this:

for each face:
    for each camera:
        compute occlusions by raytracing from the current face's vertices to the camera's origin
    find the closest camera with no occlusion (optionally, stop at the first good camera and continue)

This would involve only one iteration over all the faces.

Raph

Re: Texture mapping

Aitor Aldomà
Haven't read everything, so I might be saying something stupid...
What about generating a depth map and then back-projecting whatever you want onto the depth map to know whether it's occluded or not?
We do that all the time for self-occlusions, scene-model occlusions, etc.

cheers,
aitor


Re: Texture mapping

aichim
Yes, +1 on what Aitor said. You can easily project everything onto the image plane and use a structure for Z-buffering where you keep a pointer to your initial point in the cloud.
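
A rough sketch of that idea (illustrative only, not an existing PCL API; a real version would splat more than one pixel per point, and the constant depth threshold is an assumption):

#include <vector>
#include <limits>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Rasterize the cloud into a z-buffer, then test a query point against it.
bool
isOccluded (const pcl::PointCloud<pcl::PointXYZ> &cloud,
            const pcl::PointXYZ &query,
            float focal = 525.0f, int width = 640, int height = 480,
            float threshold = 0.02f)   // depth tolerance in meters
{
  std::vector<float> zbuf (width * height, std::numeric_limits<float>::max ());

  auto toPixel = [&] (const pcl::PointXYZ &p, int &u, int &v) -> bool
  {
    if (p.z <= 0.0f)
      return false;
    u = static_cast<int> (focal * p.x / p.z + 0.5f * width);
    v = static_cast<int> (focal * p.y / p.z + 0.5f * height);
    return (u >= 0 && u < width && v >= 0 && v < height);
  };

  // Pass 1: keep the nearest depth per pixel.
  int u, v;
  for (const auto &p : cloud)
    if (toPixel (p, u, v) && p.z < zbuf[v * width + u])
      zbuf[v * width + u] = p.z;

  // Pass 2: occluded if something sits clearly in front of the query point.
  if (!toPixel (query, u, v))
    return true;                       // outside the frustum: not visible
  return query.z > zbuf[v * width + u] + threshold;
}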


Re: Texture mapping

fheredia
Hello Everybody,

What Aitor and Alexandru mention seems quite interesting. I'm new to this topic, so chances are I haven't come across something like this before. Could you point me in some direction on where to see these procedures applied? I suppose it is quite common, judging from "We are doing that all the time for self-occlusions, scene-model occlusions, etc."

Thanks,
Francisco

Re: Texture mapping

raph
Hello,

Thanks for the feedback and interest.

Indeed, I considered doing something like this.
In the end, it is pretty similar to the range image of Bastian Steder.

Aitor: what kind of threshold do you use to decide whether a point is occluded or not?
Is it constant, or adapted to the depth of the point you project?

I am afraid that far-away surfaces would occlude themselves, like in the picture in my first post (yellow and green vertical lines).

Do you know if there is already a PCL implementation?

Raph

Re: Texture mapping

Aitor Aldomà
Hi,

we were using that to reason about occlusions before performing hypothesis verification over object recognition hypotheses. In that case we were using a constant threshold to decide whether a model point is occluded by the scene.

I just committed a very preliminary version to trunk last week (pcl_recognition/hv/occlusion_reasoning.h); that path is from the top of my head, so the name might not match exactly. Nevertheless, z-buffering is a well-studied problem, so it should not be that hard to come up with a nice implementation that handles far-away surfaces gracefully, or any of the other situations that come up.

Cheers,
aitor


Re: Texture mapping

raph
Hello,

After a little browsing, I think I will settle for an irregular Z-buffer.
http://en.wikipedia.org/wiki/Irregular_Z-buffer

Raph

Re: Texture mapping

Radu B. Rusu
Raphael,

Since we're talking about raytracing here, I hope I'm not mistaken in assuming that Christian and Julius (and Justin, etc.)
have a fast implementation available in the pcl_octree library for general-purpose 3D datasets. Obviously z-buffers
might be faster when you can use them.

What about the simulation component that Maurice and Hordur have been working on? It seems that this topic keeps popping
up on the mailing list, but off the top of my head I can't remember the solutions we ended up with :)

Cheers,
Radu.
--
http://pointclouds.org


Re: Texture mapping

raph
Hello,

a bit of news on my progress:

I'm now convinced my problem cannot be solved 100% with z-buffer techniques. The reason is that I'm dealing with point clouds, not meshes (well, at least I want to deal with point clouds, independently of any surface reconstruction).

However, I did implement a kd-tree based irregular z-buffer, where I represent points as small patches of surface.
After projecting my points on the UV space, I select their neighbors with a radius that depends on the point's depth.
If one neighbor is closer to the camera, and sits closer than a threshold from the ray that goes from the camera to the 3D point, then the point is hidden.
It is blazing fast compared to individual ray-tracing, but is far from perfect. It handles curvatures and noise poorly.
I think I can enhance it by adapting the neighbor-ray threshold to the neighbor's depth. This seems close to the surfels concept. Is there any PCL code that adapt's their radius to their depth?
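
For reference, the per-point test looks roughly like this (a sketch of the idea, not my committed code; the radius scaling and the depth margin are illustrative assumptions):

#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

// Points are stored as (u, v, 0) so a standard 3D kd-tree can be reused;
// depths[i] holds point i's distance to the camera.
bool
isHidden (std::size_t idx,
          const pcl::PointCloud<pcl::PointXYZ> &uv_cloud,
          const std::vector<float> &depths,
          pcl::KdTreeFLANN<pcl::PointXYZ> &uv_tree,
          float radius_at_1m = 5.0f,   // neighborhood radius at 1 m depth
          float margin = 0.02f)        // how much closer a neighbor must be
{
  // The neighborhood shrinks with depth: far points cover fewer pixels.
  float radius = radius_at_1m / depths[idx];

  std::vector<int> neighbors;
  std::vector<float> sqr_dists;
  uv_tree.radiusSearch (uv_cloud[idx], radius, neighbors, sqr_dists);

  // In UV space, the ray from the camera through the 3D point projects onto
  // the point itself, so a close UV neighbor approximates a point near the ray.
  for (int n : neighbors)
    if (depths[n] + margin < depths[idx])
      return true;   // a neighbor clearly in front hides this point
  return false;
}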

I've been investigating point-based visibility determination techniques, and I stumbled upon the Hidden Point Removal (HPR) operator described in this paper.
A later paper adapts it to noisy data sets.

I will see if I have some time to try implementing them in the coming weeks (the second one seems fairly complex to me; any help welcome :) ).

Cheers
Raph


Re: Texture mapping

raph
Hello all,

I've pushed my code for texturing meshes from multiple textures to trunk.
Enclosed is a small (and dirty) example that uses the code: http://www.pcl-developers.org/file/n5615478/example.cpp

You can get some data here: http://www.box.com/s/f92fb22efdda384872e2

You will need to run the code from the mesh's directory:

./example mesh_decimated.ply

This should generate an .obj file you can visualize with MeshLab.
If you import it into Blender, remember to change each material's energy so that the textures are visible at render time.

Cheers
Raph

Re: Texture mapping

Suat Gedikli
Hmm... am I missing something here, or why don't you extract the perspectively undistorted image for each triangle, use the unit u-v coordinates (e.g. 0/0, 1/0, 0/1) for the three vertices, and let VTK (OpenGL) do the texture mapping for you?
Since you already have the triangle poses in 3D together with the closest (RGB) view, you just need to find the homography that maps the projected (into the RGB image) triangle onto a unit square with e.g. 512 px sides, and use that as the texture for that triangle. Does this make sense? I just skimmed the discussion above, so maybe I am missing something.
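
For a triangle, three vertex correspondences determine that map exactly (an affine transform suffices in place of a full homography); a minimal sketch with Eigen, where the 512 px patch size is just an example value:

#include <Eigen/Dense>

// Map a triangle projected into the RGB image onto a size x size texture
// patch with corners (0, 0), (size, 0) and (0, size).
Eigen::Matrix<float, 2, 3>
triangleToPatch (const Eigen::Vector2f &p0,   // projected vertices in the image
                 const Eigen::Vector2f &p1,
                 const Eigen::Vector2f &p2,
                 float size = 512.0f)
{
  // Solve A * P = Q for the 2x3 affine matrix A, with the projected
  // vertices as homogeneous columns of P and the patch corners in Q.
  Eigen::Matrix3f P;
  P << p0.x (), p1.x (), p2.x (),
       p0.y (), p1.y (), p2.y (),
       1.0f,    1.0f,    1.0f;
  Eigen::Matrix<float, 2, 3> Q;
  Q << 0.0f, size, 0.0f,
       0.0f, 0.0f, size;
  return Q * P.inverse ();   // valid while the triangle is non-degenerate
}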

-Suat


Re: Texture mapping

Hordur Johannsson

My understanding was that the main concern was determining occlusion, i.e. what happens when a triangle isn't completely visible from a single camera view:


           /
Camera A  o
           \
                             |
           /                 |
Camera B  o                  | Triangle C
           \      +--+       |
                  |  |       |
                --+  +-------+

The triangle is fully visible in camera A and occluded in camera B.
So a possible solution is to first assign triangles to the cameras where they are fully visible; triangles that are not fully visible anywhere could be split up. When this triangle/camera assignment is finished, the texture-mapped model can be rendered using the method Suat suggested above.

If there are only a few cameras, an alternative would be projective texture mapping: treat each camera as a light source and project the camera frame onto the scene. To determine occlusion you can use shadow mapping, i.e. render the scene from the camera viewpoint and store the depth map; then, when a point is colored, its distance to the camera is compared against the depth map, which determines whether it is visible from the given camera.

Thanks,
Hordur
