Efficient Position-Based Deformable Colon Modeling for Endoscopic Procedures Simulation

Marcelo Martins, Lucas Morais, Rafael Torchelsen, Luciana Nedel, Anderson Maciel

Current endoscopy simulators oversimplify navigation and interaction within tubular anatomical structures to maintain interactive frame rates, neglecting the intricate dynamics of permanent contact between the organ and the medical tool. Traditional algorithms fail to represent the complexities of long, slender, deformable tools such as endoscopes, of hollow organs such as the human colon, and of their interaction. In this paper, we address longstanding challenges hindering the realism of surgery simulators, focusing explicitly on these structures. One of the main components we introduce is a new model for the overall shape of the organ, which is challenging to retain due to the complex surroundings inside the abdomen. Our approach uses eXtended Position-Based Dynamics (XPBD) with a Cosserat rod constraint combined with a mesh of tetrahedra to retain the colon’s shape. We also introduce a novel contact detection algorithm for tubular structures, allowing for real-time performance. This comprehensive representation captures global deformations and local features, significantly enhancing simulation fidelity compared to previous works. Results showcase that navigating the endoscope through our simulated colon closely mirrors real-world operations. Additionally, we use real-patient data to generate the colon model, resulting in a highly realistic virtual colonoscopy simulation. Integrating efficient simulation techniques with practical medical applications advances surgery simulation realism.
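The XPBD update at the heart of this kind of model can be illustrated on a generic compliant distance constraint. The sketch below is a minimal Python version of the XPBD projection loop (after Macklin et al. 2016) for a chain of particles; it is not the paper's actual Cosserat rod or tetrahedral constraint set, and the function and parameter names are illustrative.

```python
import numpy as np

def xpbd_project_distance(x, w, rest, compliance, dt, iterations=10):
    """Minimal XPBD solver for distance constraints along a particle chain.

    x: (n, 3) positions, w: (n,) inverse masses, rest: rest length per segment.
    compliance is the inverse stiffness alpha; alpha_tilde = alpha / dt^2.
    """
    alpha_tilde = compliance / (dt * dt)
    lam = np.zeros(len(x) - 1)            # one Lagrange multiplier per segment
    for _ in range(iterations):
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            length = np.linalg.norm(d)
            if length < 1e-9:
                continue
            n = d / length                 # constraint gradient direction
            C = length - rest              # constraint violation
            denom = w[i] + w[i + 1] + alpha_tilde
            dlam = (-C - alpha_tilde * lam[i]) / denom
            lam[i] += dlam
            x[i]     -= w[i]     * dlam * n
            x[i + 1] += w[i + 1] * dlam * n
    return x
```

With `compliance = 0` the constraint is rigid and the update reduces to classic PBD projection; a positive compliance yields a stiffness that is independent of the iteration count, which is the key XPBD property.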

Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation

Vismay Modi, Nicholas Sharp, Or Perel, Shinjiro Sueda, David I. W. Levin

The proliferation of 3D representations, from explicit meshes to implicit neural fields and more, motivates the need for simulators agnostic to representation. We present a data-, mesh-, and grid-free solution for elastic simulation for any object in any geometric representation undergoing large, nonlinear deformations. We note that every standard geometric representation can be reduced to an occupancy function queried at any point in space, and we define a simulator atop this common interface. For each object, we fit a small implicit neural network encoding spatially varying weights that act as a reduced deformation basis. These weights are trained to learn physically significant motions in the object via random perturbations. Our loss ensures we find a weight-space basis that best minimizes deformation energy by stochastically evaluating elastic energies through Monte Carlo sampling of the deformation volume. At runtime, we simulate in the reduced basis and sample the deformations back to the original domain. Our experiments demonstrate the versatility, accuracy, and speed of this approach on data including signed distance functions, point clouds, neural primitives, tomography scans, radiance fields, Gaussian splats, surface meshes, and volume meshes, as well as showing a variety of material energies, contact models, and time integration schemes.
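The stochastic energy evaluation described above can be sketched in a few lines: sample the bounding box uniformly, query the occupancy function, and form a Monte Carlo estimate of the integrated energy density. This is a generic sketch of the idea, not the paper's implementation; `occupancy` and `energy_density` are illustrative callables standing in for the shape interface and a per-point elastic energy.

```python
import numpy as np

def mc_elastic_energy(occupancy, energy_density, bounds, n_samples=20000, seed=0):
    """Monte Carlo estimate of total energy over an implicit shape.

    occupancy(pts) -> (n,) bool mask of points inside the shape.
    energy_density(pts) -> (m,) scalar energy density at inside points.
    bounds: (lo, hi) corners of an axis-aligned bounding box.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pts = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    inside = occupancy(pts)
    dens = np.zeros(n_samples)
    dens[inside] = energy_density(pts[inside])
    # E ~= V_box * mean(indicator * density)
    return np.prod(hi - lo) * dens.mean()
```

Because only point queries of occupancy are needed, the same estimator works unchanged over meshes, SDFs, point clouds, or neural fields, which is the representation-agnostic interface the abstract describes.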

Position-Based Nonlinear Gauss-Seidel for Quasistatic Hyperelasticity

Yizhou Chen, Yushan Han, Jingyu Chen, Zhan Zhang, Alex McAdams, Joseph Teran

Position-based dynamics [Müller et al. 2007] is a powerful technique for simulating a variety of materials. Its primary strength is its robustness when run with a limited computational budget. Even though PBD is based on the projection of static constraints, it does not work well for quasistatic problems. This is particularly relevant since the efficient creation of large data sets of plausible, but not necessarily accurate, elastic equilibria is of increasing importance with the emergence of quasistatic neural networks [Bailey et al. 2018; Chentanez et al. 2020; Jin et al. 2022; Luo et al. 2020]. Recent work [Macklin et al. 2016] has shown that PBD can be related to the Gauss-Seidel approximation of a Lagrange multiplier formulation of backward Euler time stepping, where each constraint is solved/projected independently of the others in an iterative fashion. We show that a position-based, rather than constraint-based, nonlinear Gauss-Seidel approach resolves a number of issues with PBD, particularly in the quasistatic setting. Our approach retains the essential PBD feature of stable behavior with constrained computational budgets, but also allows for convergent behavior with expanded budgets. We demonstrate the efficacy of our method on a variety of representative hyperelastic problems and show that successive over-relaxation (SOR), Chebyshev, and multiresolution-based acceleration can all be easily applied.
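The distinction between constraint-based and position-based Gauss-Seidel can be seen on a toy quasistatic problem: instead of projecting one constraint at a time, each sweep below takes a local Newton step on the total energy with respect to one position at a time, and the iteration converges to the elastic equilibrium. This is a minimal 1D sketch under assumed toy parameters (a hanging spring chain), not the paper's hyperelastic formulation.

```python
def quasistatic_gauss_seidel(n=5, k=100.0, L=1.0, m=1.0, g=9.8, sweeps=200):
    """Position-based nonlinear Gauss-Seidel on a hanging 1D spring chain.

    Node 0 is pinned at y = 0; y increases downward. Each free node is
    updated by a local Newton step on the energy terms that touch it:
    spring energies k/2 (y_i - y_{i-1} - L)^2 and gravity -m g y_i.
    """
    y = [i * L for i in range(n + 1)]    # start from the rest configuration
    for _ in range(sweeps):
        for i in range(1, n + 1):
            # gradient and Hessian of the total energy w.r.t. y[i]
            grad = k * (y[i] - y[i - 1] - L) - m * g
            hess = k
            if i < n:
                grad -= k * (y[i + 1] - y[i] - L)
                hess += k
            y[i] -= grad / hess          # local Newton step
    return y
```

At equilibrium each segment's tension carries the weight hanging below it, so segment i stretches by (n - i + 1) m g / k; a few hundred sweeps recover this to high accuracy, and truncating the sweeps early still yields a stable (if unconverged) state, mirroring the budget behavior described in the abstract.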

ContourCraft: Learning to Resolve Intersections in Neural Multi-Garment Simulations

Artur Grigorev, Giorgio Becherini, Michael Black, Otmar Hilliges, Bernhard Thomaszewski

Learning-based approaches to cloth simulation have started to show their potential in recent years. However, handling collisions and intersections in neural simulations remains a largely unsolved problem. In this work, we present ContourCraft, a learning-based solution for handling intersections in neural cloth simulations. Unlike conventional approaches that critically rely on intersection-free inputs, ContourCraft robustly recovers from intersections introduced through missed collisions, self-penetrating bodies, or errors in manually designed multi-layer outfits. The technical core of ContourCraft is a novel intersection contour loss that penalizes interpenetrations and encourages rapid resolution thereof. We integrate our intersection loss with a collision-avoiding repulsion objective into a neural cloth simulation method based on graph neural networks (GNNs). We demonstrate our method’s ability across a challenging set of diverse multi-layer outfits under dynamic human motions. Our extensive analysis indicates that ContourCraft significantly improves collision handling for learned simulation and produces visually compelling results.

Fluid Control with Laplacian Eigenfunctions

Yixin Chen, David I.W. Levin, Timothy R. Langlois

Physics-based fluid control has long been a challenging problem in balancing efficiency and accuracy. We introduce a novel physics-based fluid control pipeline using Laplacian Eigenfluids. With the adjoint method and our analytical gradient expressions, the derivative computation for the control problem is efficient and easy to formulate. We demonstrate that our method is fast enough to support real-time fluid simulation, editing, control, and optimal animation generation. Our pipeline naturally supports multi-resolution and frequency control of fluid simulations. The effectiveness and efficiency of our fluid control pipeline are validated through a variety of 2D examples and comparisons.
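The eigenfluid representation underlying this pipeline expands the velocity field in analytic, divergence-free eigenfunctions of the vector Laplacian. The sketch below builds one such basis field on the square domain [0, π]² (following the standard De Witt-style construction for this basis, not the paper's control code; the function name is illustrative).

```python
import numpy as np

def eigenfunction_velocity(k1, k2, xs, ys):
    """One divergence-free Laplacian eigenfunction on [0, pi]^2.

    Returns velocity components (u, v) sampled on the tensor grid xs x ys;
    the field has vector-Laplacian eigenvalue -(k1^2 + k2^2).
    """
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    norm = float(k1 * k1 + k2 * k2)
    u =  (k2 / norm) * np.sin(k1 * X) * np.cos(k2 * Y)
    v = -(k1 / norm) * np.cos(k1 * X) * np.sin(k2 * Y)
    return u, v
```

Any superposition of such fields is divergence-free by construction, so a simulation (or an optimizer differentiating through one) only needs to evolve the basis coefficients, which is what makes analytic gradients tractable.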

A Vortex Particle-on-Mesh Method for Soap Film Simulation

Ningxiao Tao, Liangwang Ruan, Yitong Deng, Bo Zhu, Bin Wang, Baoquan Chen

This paper introduces a novel physically-based vortex fluid model for films, aimed at accurately simulating cascading vortical structures on deforming thin films. Central to our approach is a novel mechanism decomposing the film’s tangential velocity into circulation and dilatation components. These components are then evolved using a hybrid particle-mesh method, enabling the effective reconstruction of three-dimensional tangential velocities and seamlessly integrating surfactant and thickness dynamics into a unified framework. By coupling these with the normal velocity component and a surface-tension model, our method is particularly adept at depicting complex interactions between in-plane vortices and out-of-plane physical phenomena, such as gravity, surfactant dynamics, and solid boundaries, leading to highly realistic simulations of complex thin-film dynamics and achieving an unprecedented level of vortical detail and physical realism.

Proxy Asset Generation for Cloth Simulation in Games

Zhongtian Zheng, Tongtong Wang, Qijia Feng, Zherong Pan, Xifeng Gao, Kui Wu

Simulating high-resolution cloth poses computational challenges in real-time applications. In the gaming industry, the proxy-mesh technique offers an alternative: a simplified, low-resolution cloth geometry, the proxy mesh, is simulated, and its dynamics drive the detailed high-resolution geometry, the visual mesh, through Linear Blend Skinning (LBS). However, generating a suitable proxy mesh with appropriate skinning weights from a given visual mesh is non-trivial, often requiring skilled artists several days of fine-tuning. This paper presents an automatic pipeline to convert an ill-conditioned high-resolution visual mesh into a single-layer low-poly proxy mesh. Given that the input visual mesh may not be simulation-ready, our approach simulates the proxy mesh in specific use scenarios and optimizes the skinning weights, relying on differentiable skinning with several well-designed loss functions to ensure the skinned visual mesh appears plausible in the final simulation. We have tested our method on various challenging cloth models, demonstrating its robustness and effectiveness.
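Linear Blend Skinning itself is a small computation: each visual-mesh vertex is transformed by a weight-blended combination of the proxy-driven transforms. A minimal sketch with illustrative names (the paper's contribution is optimizing the proxy geometry and these weights, not the skinning operator itself):

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Skin visual-mesh vertices with blended affine transforms.

    verts: (n, 3) rest positions; weights: (n, b) per-vertex weights over
    b driving transforms; transforms: (b, 4, 4) homogeneous matrices.
    """
    vh = np.hstack([verts, np.ones((len(verts), 1))])        # homogeneous coords
    blended = np.einsum("nb,bij->nij", weights, transforms)  # per-vertex matrix
    skinned = np.einsum("nij,nj->ni", blended, vh)
    return skinned[:, :3]
```

Because the skinned positions are linear in the weights, the whole map is differentiable with respect to them, which is what makes the weight optimization in the abstract possible.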

Real-time Physically Guided Hair Interpolation

Jerry Hsu, Tongtong Wang, Zherong Pan, Xifeng Gao, Cem Yuksel, Kui Wu

Strand-based hair simulations have recently become increasingly popular for a range of real-time applications. However, accurately simulating the full number of hair strands remains challenging. A commonly employed technique involves simulating a subset of guide hairs to capture the overall behavior of the hairstyle. Details are then enriched by interpolation using linear skinning. Hair interpolation enables fast real-time simulations but frequently leads to various artifacts during runtime. As the skinning weights are often pre-computed, substantial variations between the initial and deformed shapes of the hair can cause severe deviations in fine hair geometry: straight hairs may become kinked, and curly hairs may become zigzags. This work introduces a novel physically driven hair interpolation scheme that utilizes existing simulated guide hair data. Instead of directly operating on positions, we interpolate the internal forces from the guide hairs before efficiently reconstructing the rendered hairs based on their material model. We formulate our problem as a constraint satisfaction problem for which we present an efficient solution. Further practical considerations are addressed using regularization terms that regulate penetration avoidance and drift correction. We have tested various hairstyles to illustrate that our approach can generate visually plausible rendered hairs with only a few guide hairs and minimal computational overhead, amounting to only about 20% of the cost of conventional linear hair interpolation. This efficiency underscores the practical viability of our method for real-time applications.
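For reference, the conventional linear interpolation this paper improves on simply blends guide strand positions with fixed skinning weights; because the blend ignores the strand's material model, it is exactly what produces the kinking and zigzag artifacts described above. A minimal sketch of that baseline (illustrative names, not the paper's force-space method):

```python
import numpy as np

def interpolate_strand(guides, weights):
    """Conventional linear hair interpolation: blend guide strand positions.

    guides: (g, n, 3) vertices of g guide strands, n vertices each;
    weights: (g,) skinning weights for one rendered strand.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize the skinning weights
    return np.einsum("g,gnd->nd", w, np.asarray(guides, dtype=float))
```

The paper's scheme instead interpolates internal forces between guides and reconstructs each rendered strand from its own material model, so the blend can no longer average away curl or straightness.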

Super-Resolution Cloth Animation with Spatial and Temporal Coherence

Jiawang Yu, Zhendong Wang

Creating super-resolution cloth animations, which refine coarse cloth meshes with fine wrinkle details, faces challenges in preserving spatial consistency and temporal coherence across frames. In this paper, we introduce a general framework to address these issues, leveraging two core modules. The first module interleaves a simulator and a corrector. The simulator handles cloth dynamics, while the corrector rectifies differences in low-frequency features across various resolutions. This interleaving ensures prompt correction of spatial errors from the coarse simulation, effectively preventing their temporal propagation. The second module performs mesh-based super-resolution for detailed wrinkle enhancements. We decompose garment meshes into overlapping patches for adaptability to various styles and geometric continuity. Our method achieves an 8× improvement in resolution for cloth animations. We showcase the effectiveness of our method through diverse animation examples, including simple cloth pieces and intricate garments.

Neural-Assisted Homogenization of Yarn-Level Cloth

Xudong Feng, Huamin Wang, Yin Yang, Weiwei Xu

Real-world fabrics, composed of threads and yarns, often display complex stress-strain relationships, making their homogenization a challenging task for fast simulation by continuum-based models. Consequently, existing homogenized yarn-level models frequently struggle with numerical stability without line search at large time steps, forcing a trade-off between model accuracy and stability. In this paper, we propose a neural-assisted homogenized constitutive model for simulating yarn-level cloth. Unlike analytic models, a neural model is advantageous in adapting to complex dynamic behaviors, and its inherent smoothness naturally mitigates stability issues. We also introduce a sector-based warm-start strategy to accelerate the data collection process in homogenization. This model is trained using collected strain energy datasets and its accuracy is validated through both qualitative and quantitative experiments. Thanks to our model’s stability, our simulator can now achieve two-orders-of-magnitude speedups with large time steps compared to previous models.
