Deep learning and AI-based systems are revolutionizing healthcare and biotechnology around the world. They have the potential to detect anomalies, triage critical cases, and predict outcomes faster, and in some studies more accurately, than seasoned experts working from standard imagery.
Spinal radiography is one area where deep learning can help advance medicine significantly and be a valuable tool for health and care staff.
Because spinal conditions are complicated and often obscure, standard imaging procedures such as MRI can be time-consuming to conduct and difficult to interpret accurately in a primary care setting. This is where deep learning and AI can help.
What is deep learning?
Machine learning (ML) is a subset of artificial intelligence (AI). It allows computers to build their own knowledge and behavior without being explicitly programmed. Deep learning is a subset of ML in which artificial neural networks (ANNs) emulate how a human brain learns new information.
ML allows self-driving cars to tell the difference between a pedestrian, a lamp post, a traffic light, and a full moon in the night sky. It’s also what assistants like Google Assistant and Alexa use to understand human language and find meaning in billions of data points.
In the medical field, ML can find anomalous patterns in MRI and CT scans that otherwise only a trained human expert can spot. For example, research indicates that AI-assisted radiologists can reduce false negatives in cancer screenings.
Detection of carcinomas, liver cirrhosis, rare diseases, and bone fractures in X-ray images is positioned to increase speed and accuracy thanks to these emerging technologies.
Computer systems can use information like a patient’s symptoms and demographics to predict pathologies, recommend treatment and hospital release procedures, and predict survival time in terminal cases.
One type of ANN is the convolutional neural network (CNN). Its architecture is loosely modeled on the human visual cortex, which heightens its ability to detect image patterns. It does this by filtering or “convolving” raw input data to find thousands of visual features to classify the image.
This allows it to scan a fingerprint to determine someone’s identity or find dogs in a photograph of multiple types of animals.
A typical ANN requires that features like limb length, postural dimensions, nose width, and eye shape be explicitly specified. A CNN, on the other hand, discovers these features automatically during training.
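The “convolving” step described above can be sketched in a few lines. The 3×3 vertical-edge kernel below is a hypothetical example for illustration; a real CNN learns thousands of such filters from data rather than having them hand-written.

```python
def convolve2d(image, kernel):
    """Slide a small kernel over a 2D image (valid padding, no stride)
    and return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Multiply the kernel against the image patch and sum.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge kernel: responds where intensity changes left-to-right.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# Tiny synthetic "scan": dark left half, bright right half.
scan = [[0, 0, 1, 1]] * 4
feature_map = convolve2d(scan, edge_kernel)
```

Because the kernel straddles the dark-to-bright boundary everywhere in this toy image, every entry of the feature map lights up, which is exactly the kind of local pattern response a CNN layer stacks thousands of times over.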
How deep learning can improve spinal procedures
AI is one of many assistive technologies disrupting cleanroom facilities. For example, computer-assisted imaging, navigation, motion capture, robotic surgery, digital X-ray machines, and in-surgery augmented reality (AR) empower hospital staff.
Here are some ways deep learning has been successfully implemented in spinal care:
- Spinal radiographs can successfully evaluate the severity of adolescent idiopathic scoliosis (AIS). Cobb angle and flexion can be detected based on MRI scans. Even with a photo-based diagnosis, AI was slightly superior to human experts. Another study based on non-invasive 3D scans of the trunk surface showed 72% accuracy in scoliosis classification.
- Deep learning models can successfully grade lumbar lordosis.
- An osteoporotic spine can be predicted from bone mineral density (BMD) tests ahead of fusion surgery. In addition, deep neural networks (DNNs) show good potential for predicting infections after posterior spinal fusion.
- Lumbar fractures can be detected with over 94% sensitivity and specificity based on DEXA scans. Another study based on CT scans achieved 99% sensitivity in vertebral compression fractures.
- After lumbar stenosis surgery, AI can help select patients that can safely return home versus those who need placement in a rehabilitation facility.
- It can forecast improvement rates for evaluating the necessity of surgery for degenerative cervical myelopathy.
- DNNs have good predictive power for cervical spondylotic myelopathy.
- CNNs can differentiate between spinal schwannomas and meningiomas and tuberculous and pyogenic spondylitis using MRI.
- Data-driven ML models showed an excellent prediction of radiographic progression in axial spondyloarthritis.
- It can predict risks and complications of blood transfusion following adult spinal deformity surgery with 88% accuracy.
- It can use MRIs to analyze the texture of intervertebral disks and endplate zones to recognize causes of lower back pain and detect lumbar spinal canal stenosis.
- Machine learning algorithms performed with 80% accuracy in disease diagnosis based on walking gait.
- Supervised neural networks showed good results in segmenting the gray and white matter of the spinal cord.
Using deep learning for image processing and diagnosis
Deep learning in healthcare aims to reduce inaccuracies and streamline otherwise time-consuming manual tasks. While tools like a healthcare CRM can accelerate operational efficiency, AI can complement processes by augmenting certain tasks or assuming them entirely. Let’s look at how self-learning software works its magic.
Data collection
Deep learning starts with access to a data pool. The sheer volume of images is what fine-tunes its accuracy. It isn’t uncommon for it to analyze thousands or even tens of thousands of scans in a single study. Fortunately, major hospitals produce hundreds of scans every day. So there’s no shortage of images to help train the algorithm.
Data preprocessing
Basic operations like resizing, cropping, rotation, sharpening, and contrast adjustment are performed on the image batch to improve results. Of course, the AI can perform these operations internally if that leads to more reliable results, but preparing the images before feeding them in gives the computer a leg up.
Doing this beforehand also ensures the variables tagged with the images are standardized, so all images are equivalent data-wise.
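The standardization idea can be sketched with a simple intensity normalization. This toy example assumes grayscale pixel values held in plain lists; real pipelines use libraries such as OpenCV or Pillow for resizing and contrast work.

```python
def normalize(pixels):
    """Rescale pixel intensities to the [0, 1] range so that scans
    acquired with different exposure settings become comparable."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: avoid division by zero
        return [0.0] * len(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]

raw_scan = [40, 120, 200, 280]         # hypothetical raw intensities
normalized = normalize(raw_scan)       # darkest -> 0.0, brightest -> 1.0
```

After this step, every image in the batch lives on the same intensity scale, which is exactly the “equivalent data-wise” property the paragraph above describes.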
Image segmentation
Here, the AI pinpoints separate objects — like a cat vs. a dog vs. a tortoise — in the image and outlines them with a boundary.
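One of the simplest ways to outline an object is intensity thresholding: label every pixel brighter than a cutoff as foreground. The cutoff value here is an assumption for illustration; clinical segmentation models (e.g., U-Net-style networks) learn far richer boundaries than this.

```python
def threshold_segment(image, cutoff):
    """Return a binary mask: 1 where the pixel is brighter than the
    cutoff (foreground object), 0 elsewhere (background)."""
    return [[1 if px > cutoff else 0 for px in row] for row in image]

# Toy "scan" with a bright structure on the right side.
scan = [[10, 10, 90],
        [10, 95, 92],
        [10, 10, 88]]
mask = threshold_segment(scan, cutoff=50)
```

The mask traces the bright structure’s boundary, which is the same in/out decision a learned segmentation model makes per pixel, just with a vastly more sophisticated rule.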
Feature engineering
Features are small distinguishable motifs that occur in an image. Feature extractors simplify raw pixels into representative data by finding those shared across the dataset.
In feature selection, features are iteratively combined, abstracted, and cast away to keep only the most useful ones.
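As a sketch of feature extraction, a hand-crafted extractor might summarize each image with a few representative numbers. The specific features below (mean intensity, peak intensity, bright-pixel ratio) are illustrative choices, not ones named in the article.

```python
def extract_features(image):
    """Summarize a 2D image as a small, fixed-length feature tuple:
    (mean intensity, peak intensity, fraction of above-average pixels)."""
    flat = [px for row in image for px in row]
    mean = sum(flat) / len(flat)
    bright = sum(1 for px in flat if px > mean)   # crude "foreground" count
    return (mean, max(flat), bright / len(flat))

features = extract_features([[0, 0], [100, 100]])
```

Collapsing raw pixels into a handful of shared descriptors like this is what lets classic (non-deep) models work on image data at all; CNNs learn the equivalent descriptors automatically.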
Model selection
DNNs and CNNs are popular now, but classic techniques like random forests, k-nearest neighbors (k-NN), support vector machines (SVMs), decision trees, and naive Bayes classifiers may do just as well.
The right choice depends on factors like dataset size, feature depth, task complexity, maintainability, and available computing power.
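To make one of the classic options concrete, here is a minimal k-nearest-neighbors classifier. The two-dimensional feature vectors and the “normal”/“fracture” labels are made up for illustration; a real study would use extracted image features.

```python
from collections import Counter

def knn_predict(training_data, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points, using squared Euclidean distance between feature vectors."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training_data, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical labeled samples: (feature vector, label).
training_data = [((0, 0), "normal"),   ((1, 0), "normal"),   ((0, 1), "normal"),
                 ((5, 5), "fracture"), ((6, 5), "fracture"), ((5, 6), "fracture")]
label = knn_predict(training_data, (5.5, 5.2))
```

Note how little machinery k-NN needs compared with a DNN: no training loop at all, which is part of the dataset-size and computing-power trade-off described above.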
Training
In its training phase, the system learns to produce accurate results — not only for the current image repository but also for data to be presented in the future.
To do this, the dataset is split into three subsets:
- A training set for fitting candidate ML models.
- A validation (cross-validation) set for scoring models and selecting the best one.
- A test set for estimating how well the chosen model predicts outcomes on unseen data.
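The three-way split can be sketched with a reproducible shuffle. The 70/15/15 proportions below are an assumption for illustration; the ratios used in practice vary by study.

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle reproducibly, then cut into train / validation / test sets."""
    rng = random.Random(seed)          # fixed seed makes the split repeatable
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
```

The key property is that the three sets are disjoint: the test set is never touched during training or model selection, so it gives an honest estimate of future performance.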
Here, in hyperparameter tuning, the data scientist tweaks the model’s internal knobs. The number of neurons, the activation function, the learning rate, the number of hidden layers, and the number of iterations (epochs) are examples of hyperparameters in a DNN. Careful tuning helps prevent problems like overfitting and underfitting.
Model evaluation
Here, the data engineer decides what form the answer from the ML model should take. For example, it’s not enough to know how many subjects have spinal deformities. You also want to learn the counts of true and false positives and negatives.
With the chosen metrics comes an appropriate visualization. For example, a confusion matrix is a colored table that breaks predictions down by class. Another option is the receiver operating characteristic (ROC) curve, often summarized by its area under the curve (AUC).
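The confusion-matrix counts reduce to four numbers, from which the sensitivity and specificity figures quoted earlier in this article follow directly. The labels below are synthetic.

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, tn, fp, fn

# Synthetic ground truth and model predictions (1 = deformity present).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
sensitivity = tp / (tp + fn)   # true positive rate: deformities caught
specificity = tn / (tn + fp)   # true negative rate: healthy correctly cleared
```

Sweeping the model’s decision threshold and re-plotting sensitivity against (1 − specificity) at each point is precisely what produces the ROC curve.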
Deployment
The model is ready for operation. However, remember that the quality may shift over time because new data may vary from the dataset used in the initial training. So it’s important to keep monitoring the model for potential retraining and redeployment.
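A crude drift check can be sketched under the assumption that a shift in the mean of incoming data signals a distribution change; production systems use more robust statistical tests, but the monitoring loop looks the same.

```python
def drift_alert(baseline, incoming, tolerance=0.2):
    """Flag the model for review when the mean of incoming data drifts
    more than `tolerance` (relative) from the training-time baseline."""
    base_mean = sum(baseline) / len(baseline)
    new_mean = sum(incoming) / len(incoming)
    return abs(new_mean - base_mean) > tolerance * abs(base_mean)

# Hypothetical per-batch mean intensities seen during training.
training_stats = [0.50, 0.52, 0.48, 0.51]

steady = drift_alert(training_stats, [0.50, 0.49, 0.51])   # similar data
shifted = drift_alert(training_stats, [0.80, 0.85, 0.90])  # new scanner?
```

When the alert fires, the model goes back through the training and evaluation steps above on data that reflects the new distribution, and is then redeployed.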
AI’s role in the future of spinal care
While AI can execute high-level medical tasks, it can’t comprehend the impact of a wrong decision. Human doctors understand the “scanxiety” many patients experience around medical scans, and the human cost that can result from a mistake; computers are, of course, robotic.
An artificial brain can also develop biases regarding demographics, just as a human brain can. Computers can even display “reward hacking,” where they exploit loopholes in their objective, earning the reward for “good behavior” without fulfilling the designer’s intent.
Because AI doesn’t always do what it’s instructed to, it can be challenging to accept AI as part of the workforce. And in many cases, human experts are still better at spotting certain conditions. However, AI can be a valuable second opinion. It can also preserve the total biopsychosocial picture of the patient’s case.
Technology is at a crossroads moment. There are heaps of data, but it can be difficult for humans to make sense of it. Deep learning can analyze larger datasets and provide additional information to speed up the referral process, and it performs consistently across medical scans, including those prone to human error.
This saves time for both the doctor and the patient. It can help the patient get the right treatment faster, and it can help reduce the $110 billion annual expense of spinal care in the U.S.
It’s unlikely that we’ll see a fully autonomous synthetic surgeon anytime soon, but deep learning AI will streamline the spinal care field and play an essential role in preventing misdiagnosis.