No. | Content |
---|---|
1 | [ShapeWorks] 1. Software Introduction and Installation |
2 | [ShapeWorks] 2. The Three Key Workflow Steps - How to Groom Your Dataset? |
3 | [ShapeWorks] 3. The Three Key Workflow Steps - How to Optimize Your Shape Model? |
4 | [ShapeWorks] 4. The Three Key Workflow Steps - How to Analyze Your Shape Model? |
5 | [ShapeWorks] 5. Running and Analyzing the Classic Ellipsoid Example |
ShapeWorks needs suitable distance transforms or meshes to establish shape correspondence. The groom stage provides a pipeline to generate aligned distance transforms from binary segmentations, or groomed meshes from unaligned meshes. Common grooming steps are outlined below. For descriptions of the ShapeWorks commands used, see: ShapeWorks Commands.
This grooming step resamples all the binary volumes, which in a raw setting could be in different physical spaces (different dimensions and voxel spacing). It brings all segmentations to the same voxel spacing, typically isotropic (e.g., (1,1,1)).
A voxel spacing smaller than the original improves the resolution of the segmentations and reduces the aliasing (i.e., staircase) artifact resulting from the thresholding/binarization process.
Resampling both images and segmentations
If your dataset contains both images (e.g., CTs, MRIs) and binary segmentations, it is recommended that resampling be performed on both to keep them aligned for subsequent analyses that might need the imaging data.
Since image resampling entails interpolation, directly resampling binary segmentations will not result in a binary segmentation, but rather an interpolated version that does not have two distinct labels (i.e., foreground and background).
To mitigate this behavior, we first need to convert the binary segmentations (with zero-one voxels) to a continuous-valued (gray-scale) image. This can be done by either anti-aliasing the segmentations, which smooths the foreground-background interface, or converting a segmentation to a signed distance transform, where each voxel encodes the physical distance to the closest surface point (the zero-one interface), with the sign indicating whether the voxel is inside or outside the foreground region.
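To see why this matters, here is a plain-Python toy sketch (1D signal, not the ShapeWorks API) showing that direct interpolation of a binary signal produces non-binary values, and that thresholding after smoothing/resampling restores two distinct labels:

```python
# Toy sketch (not the ShapeWorks API): directly resampling a binary
# signal yields non-binary values; thresholding afterwards restores binarity.
binary = [0, 0, 0, 1, 1, 1, 0, 0]

def upsample_linear(values):
    """2x upsampling: insert a linearly interpolated midpoint between samples."""
    out = []
    for a, b in zip(values, values[1:]):
        out.extend([a, (a + b) / 2.0])
    out.append(values[-1])
    return out

resampled = upsample_linear(binary)
assert 0.5 in resampled  # interpolation introduced a non-binary value

def binarize(values, threshold=0.5):
    """Threshold back to two distinct labels (foreground/background)."""
    return [1 if v > threshold else 0 for v in values]

recovered = binarize(resampled)
assert set(recovered) <= {0, 1}  # binary again
```

The same logic, in 3D and with proper anti-aliasing, is what the antialias/resample/binarize pipeline below performs.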
Here is a resampling pipeline example for binary segmentations that uses anti-aliasing:
- antialias the binary segmentation to convert it to a smooth continuous-valued image
- resample the antialiased image using the same (and possibly smaller) voxel spacing for all dimensions
- binarize (aka thresholding) the resampled image to obtain a binary segmentation with the desired voxel spacing

Here is an example of resampling an ellipsoid with spacing (1,1,2) to have spacing (1,1,1):
import shapeworks as sw

antialias_iterations = 30
shape_seg = sw.Image(in_shape_filename)
# Antialias first so the binary volume becomes a smooth continuous-valued image.
shape_seg.antialias(antialias_iterations)
# Resample to isotropic (1,1,1) spacing, then threshold back to a binary segmentation.
shape_seg.resample([1, 1, 1], sw.InterpolationType.Linear)
shape_seg.binarize().write(out_shape_filename)
Resampling images
Images are already given as a continuous-valued grid of pixels. Hence, images can be directly resampled without any pre- or post-processing steps.
Resampling images reduces pixelation and smooths out intensity noise.
Resampling segmentations smooths out shape boundaries and reduces binarization aliasing.
Rigidly aligning a cohort of shapes entails removing differences across these shapes pertaining to global transformations, i.e., translation and rotation. This step requires a reference coordinate frame to align all shapes to, where one of the shapes can be selected as a reference.
Rigid alignment (aka registration) is an optimization process that might get stuck in a bad local minimum if shapes are significantly out of alignment. To bring shapes closer, we can remove translation differences using center-of-mass alignment. This factors out translations to reduce the risk of misalignment and allows a medoid sample to be automatically selected as the reference for subsequent rigid alignment.
Applying transformations to segmentations
Applying a transformation to segmentations entails interpolation due to image resampling in the new coordinate frame. Similar to the resampling workflow, we will first anti-alias the segmentation to convert it to a continuous-valued image with a smooth foreground-background interface, then apply the transformation, and finally binarize the transformed image.
Hence, the shapes alignment pipeline includes the following steps:
Center-of-mass alignment for segmentations:
- antialias the binary segmentation to convert it to a smooth continuous-valued image
- translate the segmentation so that the center of the image domain coincides with the center of mass of the shape
- binarize (aka thresholding) to get a binary segmentation
- recenter to move the center of the image (which is now the center of mass) to (0,0,0)
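The translation in this step can be sketched in plain Python on a toy 2D grid (not the ShapeWorks API); it mirrors the `center() - centerOfMass()` computation used in the example further below:

```python
# Toy sketch (not the ShapeWorks API): the translation used in
# center-of-mass alignment shifts the shape so its center of mass
# lands on the center of the image domain.
def center_of_mass(grid):
    """Center of mass of the foreground (value 1) cells of a 2D binary grid."""
    coords = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v == 1]
    n = len(coords)
    return (sum(r for r, _ in coords) / n, sum(c for _, c in coords) / n)

grid = [
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]
com = center_of_mass(grid)                                   # (1.5, 2.5): off-center
center = ((len(grid) - 1) / 2.0, (len(grid[0]) - 1) / 2.0)   # (1.5, 1.5): image center
translation = (center[0] - com[0], center[1] - com[1])       # shift to apply
assert translation == (0.0, -1.0)
```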
Reference shape selection: One option for a reference is to select the shape that is closest to all other samples in the given cohort, i.e., the medoid shape. If shape instances are misaligned (i.e., do not share the same coordinate frame), translational and rotational differences should be factored out before reference selection.
- Use the Python module function find_reference_image_index, which performs pairwise rigid registration using the iterative closest point method and selects the sample that is closest to all other samples after factoring out global transformation differences.
Rigid alignment:
- antialias the binary segmentation and the reference to convert them to smooth continuous-valued images
- createTransform: compute the rigid transformation parameters that would align a segmentation to the reference shape
- applyTransform: apply the rigid transformation to the segmentation so that it shares the same coordinate system as the reference
- binarize (aka thresholding) to get a binary segmentation
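To make the rigid-transform step concrete, here is a plain-Python 2D sketch (not the ShapeWorks createTransform/applyTransform calls) of applying a rotation-plus-translation; note that a rigid transform preserves distances, which is exactly why it only removes pose differences without deforming the shape:

```python
import math

# Toy sketch (not the ShapeWorks API): apply a 2D rigid transform
# (rotation by `angle`, then translation by (tx, ty)) to a point set.
def apply_rigid(points, angle, tx, ty):
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = apply_rigid(square, math.pi / 2, 2.0, 0.0)  # rotate 90 degrees, shift +2 in x
assert moved[0] == (2.0, 0.0)

# Rigid transforms preserve distances: the unit edge stays length 1.
assert abs(math.dist(moved[0], moved[1]) - 1.0) < 1e-9
```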
Here is an example of performing center of mass alignment on one ellipsoid:
import shapeworks as sw

shape_seg = sw.Image(in_shape_filename)
antialias_iterations = 30
# Translation that moves the shape's center of mass onto the center of the image domain.
translation_vector = shape_seg.center() - shape_seg.centerOfMass()
shape_seg.antialias(antialias_iterations) \
         .translate(translation_vector) \
         .binarize().recenter() \
         .write(out_shape_filename)
In some cases, binary segmentations need to be clipped with a cutting plane so that only the desired part of the shape is reflected in the shape model. To perform this step, you can use clip with a cutting plane defined by three points.
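The geometry behind clipping can be sketched in plain Python (not the ShapeWorks clip call): three points define a plane, and each point is kept or discarded based on the sign of its signed distance to that plane.

```python
# Toy sketch (not the ShapeWorks API): clip points against a cutting
# plane defined by three points.
def plane_from_points(p0, p1, p2):
    """Return (normal, origin) for the plane through three points (normal = u x v)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    normal = (
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    )
    return normal, p0

def keep(point, normal, origin):
    """True if the point lies on the positive side of the plane."""
    return sum(n * (p - o) for n, p, o in zip(normal, point, origin)) >= 0

# The plane through these three points is z = 0, with normal (0, 0, 1).
normal, origin = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))
assert keep((0, 0, 2), normal, origin)       # above the plane: kept
assert not keep((0, 0, -2), normal, origin)  # below the plane: clipped away
```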
In many cases, image boundaries are not tight around shapes. This leaves too many irrelevant background voxels that can increase the memory footprint when optimizing the shape model. We can remove this irrelevant background while keeping the segmentations intact, taking care that cropped segmentations do not touch the image boundary, which would create artificial holes in the shape boundary and prevent particles from being distributed in regions touching the image boundary.
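A minimal sketch of this cropping, in plain Python on a toy 2D grid (not the ShapeWorks API): crop to the tight bounding box of the foreground, then add a padding margin so the shape does not touch the image boundary.

```python
# Toy sketch (not the ShapeWorks API): crop a 2D binary grid to the
# foreground bounding box, padded so the shape stays off the boundary.
def crop_with_padding(grid, pad=1):
    rows = [r for r, row in enumerate(grid) if any(row)]
    cols = [c for c in range(len(grid[0])) if any(row[c] for row in grid)]
    r0, r1 = max(rows[0] - pad, 0), min(rows[-1] + pad, len(grid) - 1)
    c0, c1 = max(cols[0] - pad, 0), min(cols[-1] + pad, len(grid[0]) - 1)
    return [row[c0:c1 + 1] for row in grid[r0:r1 + 1]]

grid = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
cropped = crop_with_padding(grid, pad=1)
# Foreground (1x2) plus a 1-voxel margin on every side: 3 rows x 4 columns.
assert len(cropped) == 3 and len(cropped[0]) == 4
```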
For the numerical computations involved in correspondence optimization, we need to convert binary segmentations to a continuous-valued image, typically a signed distance transform, where each voxel encodes the signed physical distance to the closest surface point.
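As a plain-Python illustration (1D only, not the ShapeWorks API), a signed distance transform stores, at each sample, the distance to the nearest foreground-background interface, negative inside the foreground and positive outside:

```python
# Toy sketch (not the ShapeWorks API): a 1D signed distance transform of
# a binary signal, with the interface placed halfway between differing
# neighbours.
def signed_distance_1d(binary):
    boundary = [i for i in range(len(binary) - 1) if binary[i] != binary[i + 1]]
    edges = [i + 0.5 for i in boundary]  # interface positions
    out = []
    for i, v in enumerate(binary):
        d = min(abs(i - e) for e in edges)
        out.append(-d if v == 1 else d)  # negative inside, positive outside
    return out

sdt = signed_distance_1d([0, 0, 1, 1, 1, 0])
assert sdt[0] == 1.5   # outside, 1.5 samples from the nearest interface
assert sdt[3] == -1.5  # deepest inside the foreground
```

The zero level set of this continuous-valued image is the shape's surface, which is what the correspondence optimization operates on.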
It is common in medical imaging data to have a left and right anatomy. To align and model all such shapes, we must reflect some meshes so that all are oriented the same.
- reflect: reflects the mesh across the given axis (typically the x-axis for anatomy)

Here is an example of reflecting a mesh:
import shapeworks as sw

shape_mesh = sw.Mesh(in_mesh_filename)
shape_mesh.reflect(sw.X).write(out_mesh_filename)
Meshes can be converted to binary segmentations if desired so that grooming can be done on segmentations and optimization on distance transforms.
The steps to convert meshes to volumes are:
- toImage: convert the mesh to a signed distance transform
- binarize (aka thresholding) to get a binary segmentation

Here is an example of turning a mesh into a segmentation:
import shapeworks as sw

shape_mesh = sw.Mesh(in_mesh_filename)
# Convert the mesh to a signed distance transform, then threshold to a binary volume.
shape_seg = shape_mesh.toImage().binarize()
shape_seg.write(out_shape_filename)
For a full list of commands, check out ShapeWorks Commands.
Remeshing creates meshes with evenly spaced vertices.
- remeshPercent: remeshes the mesh to have a given percentage of the current number of vertices
mesh.remeshPercent(percentage=0.80, adaptivity=1.0)
Rigidly aligning a cohort of shapes entails removing differences across these shapes pertaining to global transformations, i.e., translation and rotation. This step requires a reference coordinate frame to align all shapes to, where one of the shapes can be selected as a reference.
In this step, we ingest the two original shapes and the output consists of three new shapes, two of which correspond to the original shapes and one for the shared boundary. Let us designate the original meshes as Lo and Ro. Then:
extracted_l, extracted_r, extracted_s = sw.MeshUtils.sharedBoundaryExtractor(mesh_l, mesh_r, tol)
The boundary loop of the shared surface obtained using the sharedBoundaryExtractor is then computed:
output_contour = sw.MeshUtils.boundaryLoopExtractor(extracted_s)
Laplacian smoothing allows you to reduce noise on a mesh's surface with minimal changes to its shape. The effect is to "relax" the mesh, making the cells better shaped and the vertices more evenly distributed.
mesh.smooth(iterations, relaxation)
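The relaxation idea can be sketched in plain Python on a 1D polyline (not the ShapeWorks smooth call): each pass moves every interior vertex a fraction (the relaxation factor) of the way toward the average of its neighbours, flattening noise while keeping the overall shape.

```python
# Toy sketch (not the ShapeWorks API): Laplacian smoothing of a noisy
# 1D polyline; `relaxation` controls how far each vertex moves per pass.
def laplacian_smooth(ys, iterations, relaxation):
    ys = list(ys)
    for _ in range(iterations):
        new = ys[:]
        for i in range(1, len(ys) - 1):
            avg = (ys[i - 1] + ys[i + 1]) / 2.0  # average of the two neighbours
            new[i] = ys[i] + relaxation * (avg - ys[i])
        ys = new
    return ys

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
smoothed = laplacian_smooth(noisy, iterations=10, relaxation=0.5)
# The zig-zag flattens while the endpoints stay fixed.
assert max(smoothed[1:-1]) < 1.0
assert smoothed[0] == 0.0 and smoothed[-1] == 0.0
```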