<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Yuze</title><link>https://yuzemedtec.com/project/</link><atom:link href="https://yuzemedtec.com/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 27 Apr 2016 00:00:00 +0000</lastBuildDate><image><url>https://yuzemedtec.com/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>Projects</title><link>https://yuzemedtec.com/project/</link></image><item><title>Robotic guidance and localization during endoluminal procedures</title><link>https://yuzemedtec.com/project/example2/</link><pubDate>Wed, 27 Apr 2016 00:00:00 +0000</pubDate><guid>https://yuzemedtec.com/project/example2/</guid><description>&lt;p>In medical endoscopy, and particularly in bronchoscopy, accurate camera pose estimation is essential. This research applies deep learning to improve the accuracy and reliability of such estimates. At its core is a modified PoseNet architecture with a ResNet34 backbone for feature extraction, trained to predict a six-degree-of-freedom (6-DoF) pose vector that encodes both the position and orientation of the camera, a necessity for navigating the complex and narrow pathways of the lung.&lt;/p>
&lt;p>The model is trained on synthetic images rendered from virtual camera paths along multiple centrelines within a 3D lung model, providing a diverse and comprehensive dataset. Training across a variety of centrelines improves generalizability, and sequential models are also explored to maintain temporal consistency across consecutive predictions, an important property in the dynamic environment of bronchoscopy.&lt;/p>
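&lt;p>A minimal sketch of the sequential-model idea, assuming an LSTM run over per-frame CNN features. The module name and dimensions are hypothetical:&lt;/p>

```python
import torch
import torch.nn as nn

class SequentialPoseHead(nn.Module):
    # Hypothetical sketch: an LSTM over per-frame features encourages
    # temporally consistent 6-DoF predictions across a bronchoscopy sequence.
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 6)

    def forward(self, feats):            # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)
        return self.fc(out)              # poses: (batch, time, 6)
```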
&lt;p>A standout feature of this research is the introduction of the Heaviside Loss function, designed specifically to reduce out-of-lung prediction errors, a common and critical failure mode of previous methods. Its effectiveness is not merely theoretical: it is quantified by a significant reduction in such errors, improving the practical applicability of the model in real-world scenarios.&lt;/p>
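&lt;p>One way such a Heaviside-style penalty could look, assuming a standard L2 pose loss plus a step-function term driven by a signed distance that is positive outside the airway. The function name, signature, and weighting are illustrative assumptions, not the published formulation:&lt;/p>

```python
import torch

def heaviside_loss(pred, target, dist_to_lumen, margin=0.0, weight=10.0):
    # Hypothetical sketch: the base L2 pose loss is augmented by a penalty
    # that activates only when the predicted camera position leaves the lung.
    base = torch.mean((pred - target) ** 2)
    outside = (dist_to_lumen > margin).float()  # Heaviside step: 1 if out of lung
    penalty = weight * torch.mean(outside * dist_to_lumen)
    return base + penalty
```

&lt;p>Predictions inside the lumen (negative signed distance) contribute no penalty, so the extra term shapes only the out-of-lung failure cases.&lt;/p>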
&lt;p>The results are both impressive and quantifiable. The enhanced pose estimation accuracy shows up as a substantially narrowed error range, demonstrated by error histograms and per-frame error analysis. Beyond the raw numbers, this translates directly into more reliable navigation during medical procedures, and the sequential models ensure the predictions are not only accurate but consistently so over time.&lt;/p>
&lt;p>Moreover, a comprehensive analysis of both aleatoric and epistemic uncertainty adds depth to the study. By estimating and combining the two, the research offers a more nuanced picture of prediction reliability, an aspect often overlooked in similar studies.&lt;/p>
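&lt;p>A common recipe for combining the two uncertainty types, sketched here under the assumption of Monte Carlo dropout passes with a predicted log-variance head. The function name and shapes are hypothetical:&lt;/p>

```python
import torch

def combined_uncertainty(mc_means, mc_logvars):
    # Hypothetical sketch: given T stochastic forward passes (e.g. MC dropout),
    # mc_means[t] is the predicted pose and mc_logvars[t] the predicted
    # log aleatoric variance for pass t, each of shape (T, 6).
    # Epistemic: spread of the mean predictions across passes.
    epistemic = mc_means.var(dim=0, unbiased=False)
    # Aleatoric: average of the variances the network itself predicts.
    aleatoric = mc_logvars.exp().mean(dim=0)
    return aleatoric + epistemic
```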
&lt;p>In conclusion, this research represents a significant advance in camera pose estimation for medical endoscopy. By applying deep learning to the challenge of bronchoscopic pose estimation, the study achieves strong accuracy and reliability and sets a precedent for further work, particularly in refining loss functions and exploring advanced optimization techniques, with the potential to bring these methods into clinical settings.&lt;/p></description></item><item><title>The Use of 3D Reconstruction and Virtual Reality to Support Prospective Bariatric Surgery Patients</title><link>https://yuzemedtec.com/project/example1/</link><pubDate>Wed, 27 Apr 2016 00:00:00 +0000</pubDate><guid>https://yuzemedtec.com/project/example1/</guid><description>&lt;p>As the prevalence of obesity rises, an increasing number of individuals are turning to bariatric surgery to safeguard their health. 3D reconstruction and virtual reality are promising tools for providing psychological support to these patients, improving body image satisfaction and the overall effectiveness of interventions. Yet challenges persist: the reconstruction process is lengthy, the procedures are intricate, reconstruction accuracy is limited, and the reconstructed results can be presented in only a limited number of ways. Together, these obstacles have impeded the widespread adoption of this approach.&lt;/p>
&lt;p>This research endeavours to improve the precision of 3D reconstructions, shorten the reconstruction process, simplify the procedure itself, and diversify how the reconstructed outcomes are presented.&lt;/p>
&lt;p>An armature iterative algorithm was introduced to precisely scale the dimensions of distinct body regions to attain a targeted percentage of total weight reduction. Concurrently, a skin fold simulation algorithm based on a mass-spring model was developed to emulate post-weight-loss skin sagging. The study also quantitatively assessed the accuracy of the 3D scanning technique and implemented various VR viewing perspectives.&lt;/p>
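&lt;p>The iterative modulation of region dimensions might be sketched as a bisection search for a global scale factor, assuming per-region volumes and per-region sensitivity factors. All names and the bisection formulation here are illustrative assumptions, not the actual algorithm:&lt;/p>

```python
import numpy as np

def scale_to_target_loss(region_volumes, region_factors, target_loss, tol=1e-6):
    # Hypothetical sketch: find a global scale s such that shrinking each
    # body region by s * region_factors[i] removes the requested fraction
    # of total volume (a proxy for percentage of total weight lost).
    total = region_volumes.sum()
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                 # bisection on the global scale
        s = 0.5 * (lo + hi)
        lost = (region_volumes * region_factors * s).sum()
        if lost / total > target_loss:
            hi = s
        else:
            lo = s
    return 0.5 * (lo + hi)
```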
&lt;p>Visualizations were produced for varying percentages of weight loss paired with distinct levels of skin folding. Quantitative analysis identified improved 3D scanning techniques, and a hybrid of first-person and third-person viewpoints was adopted to optimize the VR experience for participants.&lt;/p>
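&lt;p>The mass-spring principle behind the skin fold simulation can be illustrated with a one-dimensional chain of point masses sagging under gravity. This is a deliberately simplified sketch; the real simulation operates on a 3D mesh, and every name and constant below is hypothetical:&lt;/p>

```python
import numpy as np

def mass_spring_step(pos, vel, rest, k=50.0, damping=0.9, gravity=-9.8, dt=0.01):
    # Hypothetical 1D sketch of the mass-spring idea: each vertex is a point
    # mass joined to its neighbour by a spring; gravity pulls tissue downward.
    stretch = np.diff(pos) - rest        # deviation from the spring rest length
    force = np.zeros_like(pos)
    force[:-1] += k * stretch            # stretched spring pulls upper mass down
    force[1:]  -= k * stretch            # and pulls lower mass up (reaction)
    force += gravity                     # gravity acts on every mass
    vel = damping * (vel + force * dt)   # damped semi-implicit Euler update
    pos = pos + vel * dt
    pos[0] = 0.0                         # pin the top vertex (attachment point)
    vel[0] = 0.0
    return pos, vel
```

&lt;p>Iterating this step lets the chain settle into a sagged equilibrium, the same qualitative behaviour used to depict post-weight-loss skin folds.&lt;/p>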
&lt;p>By integrating the two algorithms within the 3D reconstruction phase, overall procedure time was reduced to 15% of that of conventional methods. This streamlined approach both simplifies the process for researchers and improves the accuracy of the reconstruction outcomes, making a more comprehensive representation of post-weight-loss morphological changes attainable.&lt;/p></description></item></channel></rss>