Alex Steele ’28
As machine-learning-based technologies are incorporated into the delivery of medical care, adoption has varied across specialties. Some areas lend themselves well to computer-driven algorithmic assistance, while others have fewer aspects that can be easily addressed or readily augmented with artificial intelligence. Among the various medical fields, neurosurgery sits at the forefront of machine-learning (ML) implementation for two key reasons.
Firstly, neurosurgery demands extremely high spatial precision and real-time decision-making. Neurosurgical procedures typically involve manipulating or resecting tissues in anatomical regions that are delicate and compact, as well as potentially dynamic, since the brain can shift and deform mid-operation once the dura mater is opened. Because of these anatomical features, small errors can damage critical neural structures, and conventional navigation based solely on static preoperative imaging has potentially dangerous limitations if tissue movement occurs. Surgical microscopes provide real-time visualization, but their two-dimensional output lacks depth and can be insufficient as neurosurgeons make critical decisions, such as placing catheters or navigating around blood vessels and other fine structures (Lim, 2025). ML-enhanced tools can offer surgeons more comprehensive and precise intraoperative support as they perform intricate procedures, making the potential benefit of this novel technology significant (Munir, 2025).
Secondly, the development of ML tools for neurosurgery is critically enabled by the generation of exceptionally rich intraoperative data. Many neurosurgical procedures, especially minimally invasive ones, use endoscopes or exoscopes to provide continuous video streams of the operative field. These video streams can serve as plentiful training data for ML models learning anatomy segmentation, instrument detection, and related tasks. Additionally, neurosurgery often employs advanced intraoperative imaging techniques, including ultrasound, MRI, and CT (Nimsky & Carl, 2017). The amount and diversity of imaging data collected during neurosurgery gives ML models a wealth of material for training. These large data sets have historically been too unwieldy to use effectively, but new models are tapping these imaging resources to develop intraoperative image analysis capabilities.
One application of ML in neurosurgery that has particular promise is in augmented reality (AR) technology, which integrates digital information by overlaying content onto a user’s real environment, thereby enriching the surgeon’s perception of reality (Hayes & Downie, 2024). In the surgical setting, AR has the potential to provide precise, previously inaccessible intraoperative assistance. In a recent example, researchers from Tulane School of Medicine used a headset-based AR system to guide placement of an external ventricular drain (EVD) in a patient with complex cranial anatomy and midline shift (Janssen et al., 2024). EVD placement is a critical neurosurgical procedure that involves inserting a catheter into the brain’s ventricular system to relieve intracranial pressure by draining cerebrospinal fluid. The procedure can be risky, since misplacement can lead to surgical complications, and it relies heavily on the surgeon’s experience because it is traditionally performed without navigation (Olexa et al., 2022). The system used in this case overlaid a 3D model of the patient’s unique cranial anatomy, derived from preoperative imaging, onto the patient’s head, providing the surgeon with “targeted trajectories” via a much simpler system than traditional neuronavigation (Janssen et al., 2024). This case demonstrates the feasibility of AR in time-sensitive neurosurgical tasks, allowing procedures that are traditionally landmark- and experience-dependent to be augmented by real-time spatial visualization. This shift holds promise for increasing precision and reducing variability in outcomes.
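The geometric core of such an overlay is a rigid registration that maps a preoperative 3D model into the headset camera's view. The sketch below is not the Janssen et al. system and all numbers are hypothetical; it simply shows, with NumPy, the projection step that places a model point at the correct pixel in the surgeon's display:

```python
import numpy as np

def project_overlay(model_points, R, t, K):
    """Project 3D model points (N x 3, patient coordinates) into the
    headset camera image using a rigid registration (R, t) and pinhole
    camera intrinsics K -- the core operation behind an AR overlay."""
    cam = model_points @ R.T + t           # patient frame -> camera frame
    pixels = cam @ K.T                     # apply pinhole intrinsics
    return pixels[:, :2] / pixels[:, 2:3]  # perspective divide -> (u, v)

# Hypothetical numbers: identity registration, camera 0.5 m from the head
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
target = np.array([[0.0, 0.0, 0.0]])      # e.g. a planned catheter target
print(project_overlay(target, R, t, K))   # -> [[320. 240.]], image center
```

In practice the registration (R, t) is not known in advance; it is estimated by aligning fiducials or surface scans of the patient's head with the preoperative model, and the overlay is only as accurate as that alignment.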
Even more advanced systems are being developed to combine AR with real-time intraoperative imaging modalities and ML-based tissue classification, allowing surgeons to visualize not only structure but also tissue properties and pathological boundaries. For example, the system SLIMBRAIN captures hyperspectral imaging data during tumor resection at ~14 frames per second, uses ML classifiers to distinguish tumor from normal brain tissue in real time, and projects those classification results via AR onto a 3D point cloud of the surgical field (Sancho et al., 2023). In this manner, the surgeon has live, color-coded overlays, such as a highlighted tumor region, projected on the surgical field without interrupting resection. Furthermore, SLIMBRAIN’s multimodal design and depth-aware AR rendering provide spatially accurate overlays that capture three-dimensional geometry, thereby elevating the surgeon’s view from a two-dimensional camera feed (Martín-Pérez et al., 2025). Initial performance of the SLIMBRAIN system has been very promising, and its potential applications include a broad array of neurosurgical procedures.
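As a rough illustration of per-pixel tissue classification, the sketch below applies a nearest-centroid rule to each pixel's spectral signature. This is a deliberately simplified stand-in for SLIMBRAIN's trained classifiers, and the reference spectra and frame are made up; real systems learn from many labeled in vivo spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
bands = 25                                # spectral bands per pixel

# Made-up reference spectra for the two tissue classes
tumor_ref = np.linspace(0.8, 0.2, bands)
normal_ref = np.linspace(0.2, 0.8, bands)

def classify_frame(frame):
    """frame: (H, W, bands) hyperspectral cube -> (H, W) label map,
    1 = tumor, 0 = normal, by distance to the nearer reference spectrum."""
    d_tumor = np.linalg.norm(frame - tumor_ref, axis=-1)
    d_normal = np.linalg.norm(frame - normal_ref, axis=-1)
    return (d_tumor < d_normal).astype(np.uint8)

# Synthetic 4x4 frame: left half tumor-like, right half normal-like
frame = np.empty((4, 4, bands))
frame[:, :2] = tumor_ref + rng.normal(0, 0.02, (4, 2, bands))
frame[:, 2:] = normal_ref + rng.normal(0, 0.02, (4, 2, bands))
labels = classify_frame(frame)
print(labels)   # label map the AR system would color-code as an overlay
```

The resulting label map is what a system like SLIMBRAIN renders as a color-coded AR overlay, recomputed frame by frame so the highlighted region tracks the live surgical field.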
Another promising frontier for ML in neurosurgery lies in enhancing intraoperative imaging techniques already in clinical use that suffer from low interpretability, high noise, or operator dependence. For example, intraoperative ultrasound (ioUS) offers real-time imaging for glioma segmentation, which is critical for identifying tumor boundaries and sub-regions (Grubert et al., 2019). While portable and cost-effective, the technology has had limited clinical utility due to image artifacts, variability in acquisition angles, and difficulty of manual interpretation. A recent study by Cepeda et al. (2025) demonstrated the feasibility of using a convolutional neural network to automatically perform glioma segmentation with promising accuracy, suggesting that ML-based segmentation can significantly improve the interpretability and reliability of ioUS in the operating room. If translated into real-time clinical settings, automated segmentation could assist surgeons in delineating tumor boundaries more precisely during resection, thereby reducing reliance on subjective interpretation and potentially improving the completeness of tumor removal. These examples highlight a few of many ways in which AI is currently being tested and implemented to assist neurosurgical procedures.
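To give a sense of how convolutional models process such images, the sketch below runs a single hand-set convolution plus a sigmoid over a toy "ultrasound" frame. This is only a caricature: segmentation networks like the one evaluated by Cepeda et al. stack many learned layers, whereas here the filter, bias, and frame are all invented for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution -- the basic operation a CNN stacks
    many times to turn an ultrasound frame into a segmentation map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def segment(image, kernel, bias=0.0):
    """One convolution + sigmoid + threshold: a single-layer caricature
    of CNN segmentation (real models learn many layers of filters)."""
    logits = conv2d(image, kernel) + bias
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > 0.5).astype(np.uint8)

# Toy frame: a bright (hyperechoic) 3x3 "tumor" on dark background tissue
frame = np.zeros((8, 8))
frame[2:5, 2:5] = 1.0
kernel = np.full((3, 3), 1.0)       # hand-set brightness-summing filter
mask = segment(frame, kernel, bias=-4.0)
print(mask)                         # 1s mark the bright lesion region
```

A trained network differs in that the filters and biases are learned from annotated scans rather than set by hand, which is what lets it cope with the artifacts and acquisition variability that make manual ioUS interpretation difficult.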
Neurosurgery is certainly not alone among medical specialties in its development of ML-based technologies for clinical use. For example, in otolaryngology (ENT), researchers have developed AR overlays driven by CT imaging and intraoperative endoscope video to provide navigation trajectories to surgeons during sinus operations (Citardi et al., 2015). Additionally, in urology, the deep learning model HRNetV2 has demonstrated strong performance in intelligent bladder lesion detection during cystoscopy, showing potential to improve detection accuracy in clinical practice, which is crucial for early tumor diagnosis (Ye et al., 2025). Finally, colorectal surgeons and gastroenterologists are investigating the ability of computer-aided detection systems to increase detection rates of precancerous adenomas during colonoscopy (Park et al., 2024). Utilization of these AI-assisted programs holds significant promise to enhance cancer detection and facilitate earlier intervention.
Overall, the use of ML intraoperatively represents a rapidly expanding frontier in surgical innovation, with neurosurgery emerging as one of its most active and promising testing grounds. As illustrated by AR-guided navigation systems, real-time tissue classification platforms such as SLIMBRAIN, and ML-enhanced interpretation of intraoperative imaging, these technologies have the potential to meaningfully augment surgeon perception, reduce variability, and improve the precision of complex procedures. At the same time, concerns remain regarding the risks of overreliance on algorithmic guidance and surgeon deskilling, as automation may erode critical human expertise if it is used without adequate oversight and training (Hmido et al., 2025). There are also unresolved questions of liability when ML-assisted decisions contribute to adverse outcomes, as current legal frameworks struggle to assign responsibility among clinicians, health systems, and device manufacturers (Cestonaro et al., 2023). Issues of transparency, validation across institutions, and integration into existing surgical workflows also pose critical barriers to widespread adoption. Nevertheless, given the magnitude of the potential benefit of improving clinical outcomes, the incentive to refine and responsibly deploy ML-based intraoperative tools remains strong. With continued interdisciplinary collaboration, rigorous clinical validation, and strategic governance, intraoperative ML is poised to become an increasingly important tool in achieving safer and more effective care.
Alex Steele is a staff writer at The Princeton Medical Review. He can be reached at as3034@princeton.edu.
References
Cepeda, S., Esteban-Sinovas, O., Singh, V., Shetty, P., Dixon, L., Weld, A., Camp, S., Giammalva, G. R., Bene, M. D., Barbotti, A., DiMeco, F., West, T. R., Nahed, B. V., Romero, R., Arrese, I., Hornero, R., Sarabia, R., Moiyadi, A., Anichini, J., & Giannarou, S. (2025). Deep learning-based glioma segmentation of 2D intraoperative ultrasound images: A multicenter study using the brain tumor intraoperative ultrasound database (BraTioUS). Cancers, 17(2), 315. https://doi.org/10.3390/cancers17020315
Cestonaro, C., Delicati, A., Marcante, B., Caenazzo, L., & Tozzo, P. (2023). Defining medical liability when artificial intelligence is applied on diagnostic algorithms: A systematic review. Frontiers in Medicine, 10, 1305756. https://doi.org/10.3389/fmed.2023.1305756
Citardi, M. J., Agbetoba, A., Bigcas, J.-L., & Luong, A. (2015). Augmented reality for endoscopic sinus surgery with surgical navigation: a cadaver study. International Forum of Allergy & Rhinology, 6(5), 523–528. https://doi.org/10.1002/alr.21702
Grubert, R. M., Tibana, T. K., Marchiori, E., Kadri, P. A. do S., & Nunes, T. F. (2019). Intraoperative ultrasound for identifying residual tumor during glioma surgery. Radiologia Brasileira, 52(5), 312–313. https://doi.org/10.1590/0100-3984.2018.0046
Hmido, S. B., Rahim, H. A., Keller, B., Schakel, M., Nieveen van Dijkum, E. V. M., Rainey, S., Bak, M., Daams, F., Goslings, J. C., Kazemier, G., & Ploem, C. (2025). Ethical pitfalls in AI‐based predictive models in surgery. World Journal of Surgery, 49(10). https://doi.org/10.1002/wjs.70080
Janssen, A., Wang, A., Dumont, A. S., & Delashaw, J. (2024). Augmented reality-guided external ventricular drain placement: A case report. Cureus, 16(7). https://doi.org/10.7759/cureus.64403
Lim, R. (2025). Neuronavigation advances: Precision, safety, patient outcomes. Journal of Advanced Surgical Research, 9(3). https://doi.org/10.35841/2591-7765-9.3.218
Martín-Pérez, A., Villa, M., Rosa Olmeda, G., Sancho, J., Vazquez, G., Urbanos, G., Martinez de Ternero, A., Chavarrías, M., Jimenez-Roldan, L., Perez-Nuñez, A., Lagares, A., Juarez, E., & Sanz, C. (2025). SLIMBRAIN database: A multimodal image database of in vivo human brains for tumour detection. Scientific Data, 12(1). https://doi.org/10.1038/s41597-025-04993-y
Munir, K. (2025). Integrating AI into neurosurgical decisions: a new frontier in medicine. The Egyptian Journal of Neurosurgery, 40(1). https://doi.org/10.1186/s41984-025-00412-x
Nimsky, C., & Carl, B. (2017). Historical, Current, and Future Intraoperative Imaging Modalities. Neurosurgery Clinics of North America, 28(4), 453–464. https://doi.org/10.1016/j.nec.2017.05.001
Olexa, J., Cohen, J., Alexander, T., Brown, C., Schwartzbauer, G., & Woodworth, G. F. (2022). Expanding educational frontiers in neurosurgery: Current and future uses of augmented reality. Neurosurgery, 92(2), 241–250. https://doi.org/10.1227/neu.0000000000002199
Park, D. K., Kim, E. J., Im, J. P., Lim, H., Lim, Y. J., Kim, K. O., Chung, J.-W., Kim, Y. J., & Byeon, J.-S. (2024). A prospective multicenter randomized controlled trial on artificial intelligence assisted colonoscopy for enhanced polyp detection. Scientific Reports, 14(1). https://doi.org/10.1038/s41598-024-77079-1
Sancho, J., Villa, M., Chavarrías, M., Juarez, E., Lagares, A., & Sanz, C. (2023). SLIMBRAIN: Augmented reality real-time acquisition and processing system for hyperspectral classification mapping with depth information for in-vivo surgical procedures. Journal of Systems Architecture, 140, 102893. https://doi.org/10.1016/j.sysarc.2023.102893
Ye, Z., Li, Y., Sun, Y., He, C., He, G., & Ji, Z. (2025). Leveraging deep learning in real-time intelligent bladder tumor detection during cystoscopy: A diagnostic study. Annals of Surgical Oncology, 32(5), 3220–3226. https://doi.org/10.1245/s10434-025-17015-3
