3D point set deformation controlled by shape properties remains a challenging task in shape modelling. Existing neural networks learn point-wise feature vectors and then predict point displacements to deform 3D shapes. However, these solutions often learn features independently for each point, i.e., without considering neighborhood constraints. In this paper, we propose a deep learning architecture named Controlled Point Deformation Network (CPDNet), which exploits shape properties to predict the postoperative spine shape resulting from corrective surgery to treat scoliosis. CPDNet learns the rigid transformations between the 3D landmarks of consecutive vertebrae in the spine. Point-wise feature vectors are extracted from the 3D preoperative spine using a fully convolutional network and concatenated with selected patient clinical metadata. Then, 3D point-wise displacement vectors are predicted and added to the input points to obtain the postoperative spine shape. A geometric shape loss computes the differences between the 3D coordinates of the predicted and target spine shapes. A rigid transformation loss computes the differences in rotation and translation between consecutive vertebrae, guiding the network to learn spinal shape properties. We trained and validated our model on 99 patients who previously underwent posterior spinal fusion surgery. On the test set, our model achieves average errors of 1.5°, 7.6°, and 4.9° for three clinical indices, namely coronal balance and the Cobb angles in the sagittal and coronal planes, respectively, outperforming the state-of-the-art P2P-NET model. Our model could serve as the basis for a surgical planning tool for scoliosis treatment, allowing surgeons and patients to visualize the predicted outcome of spinal surgery.
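To make the pipeline concrete, below is a minimal PyTorch sketch of the ideas described above: per-point features from shared 1×1 convolutions (a fully convolutional encoder), concatenation with broadcast clinical metadata, displacement prediction added back to the input points, and the two losses. All names, layer widths, the metadata dimension, the landmark counts, and the Kabsch/SVD estimation of inter-vertebral rigid transforms are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a CPDNet-style pipeline. Layer widths, meta_dim, landmark
# counts, and the Kabsch-based rigid-transform estimation are assumptions.
import torch
import torch.nn as nn


class PointDisplacementNet(nn.Module):
    """Predicts per-point 3D displacements for preoperative spine landmarks."""

    def __init__(self, meta_dim: int = 8, feat_dim: int = 64):
        super().__init__()
        # Shared 1x1 convolutions act as a per-point MLP (fully convolutional).
        self.encoder = nn.Sequential(
            nn.Conv1d(3, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU(),
        )
        # The decoder consumes point features concatenated with the metadata,
        # broadcast to every point, and outputs a 3D displacement per point.
        self.decoder = nn.Sequential(
            nn.Conv1d(feat_dim + meta_dim, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, 3, 1),
        )

    def forward(self, points: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # points: (B, 3, N) preoperative landmarks; meta: (B, meta_dim)
        feats = self.encoder(points)
        meta_b = meta.unsqueeze(-1).expand(-1, -1, points.shape[-1])
        disp = self.decoder(torch.cat([feats, meta_b], dim=1))
        return points + disp  # predicted postoperative shape


def shape_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Geometric shape loss: mean squared distance between 3D coordinates."""
    return ((pred - target) ** 2).sum(dim=1).mean()


def kabsch(src: torch.Tensor, dst: torch.Tensor):
    """Least-squares rigid transform (R, t) mapping src to dst, both (K, 3)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = torch.linalg.svd(src_c.T @ dst_c)
    if torch.det(Vt.T @ U.T) < 0:       # fix an improper rotation (reflection)
        Vt = torch.cat([Vt[:-1], -Vt[-1:]])
    R = Vt.T @ U.T
    return R, dst.mean(0) - R @ src.mean(0)


def rigid_loss(pred_verts, target_verts):
    """Compares inter-vertebral rotations/translations of prediction vs. target.

    pred_verts, target_verts: lists of (K, 3) landmark sets, one per vertebra,
    ordered along the spine.
    """
    loss = 0.0
    for i in range(len(pred_verts) - 1):
        R_p, t_p = kabsch(pred_verts[i], pred_verts[i + 1])
        R_t, t_t = kabsch(target_verts[i], target_verts[i + 1])
        loss = loss + torch.norm(R_p - R_t) + torch.norm(t_p - t_t)
    return loss / (len(pred_verts) - 1)
```

A hypothetical forward pass, assuming 17 vertebrae with 6 landmarks each and an 8-dimensional metadata vector:

```python
net = PointDisplacementNet(meta_dim=8)
pts = torch.randn(2, 3, 17 * 6)   # (batch, xyz, points); sizes are assumed
meta = torch.randn(2, 8)          # hypothetical clinical metadata vector
pred = net(pts, meta)             # (2, 3, 102) predicted postoperative shape
```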