
A.I. Comes to the Operating Room

Brain surgeons are bringing artificial intelligence and new imaging techniques into the operating room, to diagnose tumors as accurately as pathologists, and much faster, according to a report in the journal Nature Medicine.

The new approach streamlines the standard practice of analyzing tissue samples while the patient is still on the operating table, to help guide brain surgery and later treatment.

The traditional method, which requires sending the tissue to a lab, freezing and staining it, then peering at it through a microscope, takes 20 to 30 minutes or longer. The new technique takes two and a half minutes. Like the old method, it requires that tissue be removed from the brain, but uses lasers to create images and a computer to read them in the operating room.

“Although we often have clues based on preoperative M.R.I., establishing diagnosis is a primary goal of almost all brain tumor operations, whether we’re removing a tumor or just taking a biopsy,” said Dr. Daniel A. Orringer, a neurosurgeon at N.Y.U. Langone Health and the senior author of the report.

In addition to speeding up the process, the new technique can also detect some details that traditional methods may miss, like the spread of a tumor along nerve fibers, he said. And unlike the usual method, the new one does not destroy the sample, so the tissue can be used again for further testing.

The new process may also help in other procedures where doctors need to analyze tissue while they are still operating, such as head and neck, breast, skin and gynecologic surgery, the report said. It also noted that there is a shortage of neuropathologists, and suggested that the new technology might help fill the gap in medical centers that lack the specialty.

[Video: Doctors demonstrate acquiring a tissue sample and running it through the A.I. pipeline to test for brain cancer. Michigan Medicine]

Algorithms are also being developed to help detect lung cancers on CT scans, diagnose eye disease in people with diabetes and find cancer on microscope slides. The new report brings artificial intelligence — so-called deep neural networks — a step closer to patients and their treatment.

The study involved brain tissue from 278 patients, analyzed while the surgery was still going on. Each sample was split, with half going to A.I. and half to a neuropathologist. The diagnoses were later judged right or wrong based on whether they agreed with the findings of lengthier and more extensive tests performed after the surgery.

The result was a draw: humans, 93.9 percent correct; A.I., 94.6 percent.

The study was paid for by the National Cancer Institute, the University of Michigan and private foundations. Dr. Orringer owns stock in the company that made the imaging system, as do several co-authors, who are company employees. He conducted the research at the University of Michigan, before moving to New York.

“Having an accurate intra-operative diagnosis is going to be very useful,” said Dr. Joshua Bederson, the chairman of neurosurgery for the Mount Sinai Health System, who was not involved in the study. He added, “I think they understated the significance of this.”

He said the traditional method of examining tissue during brain surgery, called a frozen section, often took much longer than 30 minutes, and was often far less accurate than it was in the study. At some centers, he said, brain surgeons do not even order frozen sections because they do not trust them and prefer to wait for tissue processing after the surgery, which may take weeks to complete.

“The neuropathologists I work with are outstanding,” Dr. Bederson said. “They hate frozen sections. They don’t want us to make life-altering decisions based on something that’s not so reliable.”

Dr. Bederson said that the study authors had set a very high bar for their new technique by pitting it against experts at three medical centers renowned for excellence in neurosurgery and neuropathology: Columbia University in New York, the University of Miami and the University of Michigan, Ann Arbor.

“I think that what happened with this study is that because they wanted to do a good comparison, they had the best of the best of the traditional method, which I think far exceeds what’s available in most cases,” Dr. Bederson said.

The key to the study was the use of lasers to scan tissue samples with certain wavelengths of light, a technique called stimulated Raman histology. Different types of tissue scatter the light in distinctive ways. The light hits a detector, which emits a signal that a computer can process to reconstruct the image and identify the tissue.

The system also generates virtual images similar to traditional slides that humans can examine.

The researchers used images from tissue from 415 brain surgery patients to train an artificial intelligence system to identify the 10 most common types of brain tumor.

Some types of brain tumor are so rare that there is not enough data on them to train an A.I. system, so the system in the study was designed to essentially toss out samples it could not identify.
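
The article does not say how the system decides when to toss out a sample, but a common mechanism is thresholding the classifier's confidence. A minimal sketch, assuming a network that outputs a probability for each of the 10 tumor classes and a confidence cutoff chosen by the developers (both details are assumptions for illustration):

```python
import numpy as np

def classify_or_abstain(probs, class_names, threshold=0.9):
    """Return a tumor type, or None to defer the sample to a human.

    probs: per-class probabilities from the trained network (sums to 1).
    class_names: the tumor categories the network was trained on
        (10 in the study; 3 shown below for brevity).
    threshold: assumed confidence cutoff -- not specified in the article.
    """
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return None  # low confidence: possibly a rare type outside the training set
    return class_names[top]

# Hypothetical example.
names = ["glioblastoma", "meningioma", "metastasis"]
print(classify_or_abstain(np.array([0.95, 0.03, 0.02]), names))  # glioblastoma
print(classify_or_abstain(np.array([0.40, 0.35, 0.25]), names))  # None -> pathologist
```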

Over all, the system did make mistakes: It misdiagnosed 14 cases that the humans got right. And the doctors missed 17 cases that the computer got right.

“I couldn’t have hoped for a better result,” Dr. Orringer said. “It’s exciting. It says the combination of an algorithm plus human intuition improves our ability to predict diagnosis.”
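
The arithmetic behind that optimism is that the two kinds of readers miss different cases. As an illustration only (this calculation is not from the paper), the ceiling for a combined workflow is the fraction of cases where at least one reader is right:

```python
def combined_ceiling(ai_correct, human_correct):
    """Fraction of cases where at least one reader got the diagnosis right.

    ai_correct, human_correct: parallel lists of booleans, one per case.
    """
    hits = sum(a or h for a, h in zip(ai_correct, human_correct))
    return hits / len(ai_correct)

# Hypothetical toy outcomes for six cases, chosen to show complementary errors.
ai    = [True, True, False, True, False, True]
human = [True, False, True, True, True, True]

print(f"A.I. alone:   {sum(ai) / len(ai):.0%}")            # 67%
print(f"Human alone:  {sum(human) / len(human):.0%}")      # 83%
print(f"Either right: {combined_ceiling(ai, human):.0%}")  # 100%
```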

In his own practice, Dr. Orringer said that he often used the system to determine quickly whether he had removed as much of a brain tumor as possible, or should keep cutting.

“If I have six questions during an operation, I can get them answered without having six times 30 or 40 minutes,” he said. “I didn’t do this before. It’s a lot of burden to the patient to be under anesthesia for that long.”

Dr. Bederson said that he had participated in a pilot study of a system similar to the one in the study and wanted to use it, and that his hospital was considering acquiring the technology.


A.I. Is Learning to Read Mammograms

Artificial intelligence can help doctors do a better job of finding breast cancer on mammograms, researchers from Google and medical centers in the United States and Britain are reporting in the journal Nature.

The new system for reading mammograms, which are X-rays of the breast, is still being studied and is not yet available for widespread use. It is just one of Google’s ventures into medicine. Computers can be trained to recognize patterns and interpret images, and the company has already created algorithms to help detect lung cancers on CT scans, diagnose eye disease in people with diabetes and find cancer on microscope slides.

“This paper will help move things along quite a bit,” said Dr. Constance Lehman, director of breast imaging at the Massachusetts General Hospital in Boston, who was not involved in the study. “There are challenges to their methods. But having Google at this level is a very good thing.”

Tested on images where the diagnosis was already known, the new system performed better than radiologists. On scans from the United States, it reduced false negatives, in which a mammogram is mistakenly read as normal and a cancer is missed, by 9.4 percent. It also cut false positives, in which the scan is incorrectly judged abnormal even though there is no cancer, by 5.7 percent.

On mammograms performed in Britain, the system also beat the radiologists, reducing false negatives by 2.7 percent and false positives by 1.2 percent.
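
The percentages refer to how often each kind of error occurs among truly positive and truly negative screens. A minimal sketch (not Google's evaluation code) of how the two rates are computed from predictions scored against confirmed outcomes:

```python
def error_rates(predicted, actual):
    """Return (false_negative_rate, false_positive_rate).

    predicted, actual: parallel lists of booleans (True = cancer).
    """
    fn = sum(a and not p for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    positives = sum(actual)              # screens from women who had cancer
    negatives = len(actual) - positives  # screens from women who did not
    return fn / positives, fp / negatives

# Hypothetical toy data: 4 cancers, 4 healthy.
actual  = [True, True, True, True, False, False, False, False]
readers = [True, True, False, False, True, False, False, False]  # 2 misses, 1 false alarm
model   = [True, True, True, False, False, False, False, False]  # 1 miss, no false alarms

print("radiologists:", error_rates(readers, actual))  # (0.5, 0.25)
print("model:       ", error_rates(model, actual))    # (0.25, 0.0)
```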

Google paid for the study, and worked with researchers from Northwestern University in Chicago and two British medical centers, the Cancer Research UK Imperial Centre and Royal Surrey County Hospital.

Last year, 268,600 new cases of invasive breast cancer and 41,760 deaths were expected among women in the United States, according to the American Cancer Society. Globally, there are about 2 million new cases a year, and more than half a million deaths.

About 33 million screening mammograms are performed each year in the United States. The test misses about 20 percent of breast cancers, according to the American Cancer Society, and false positives are common, resulting in women being called back for more tests, sometimes even biopsies.

Doctors have long wanted to make mammography more accurate.

“There are many radiologists who are reading mammograms who make mistakes, some well outside the acceptable margins of normal human error,” Dr. Lehman said.

To apply artificial intelligence to the task, the authors of the Nature report used mammograms from about 76,000 women in Britain and 15,000 in the United States, whose diagnoses were already known, to train computers to recognize cancer.

Then, they tested the computers on images from about 25,000 other women in Britain, and 3,000 in the United States, and compared the system’s performance with that of the radiologists who had originally read the X-rays. The mammograms had been taken in the past, so the women’s outcomes were known, and the researchers could tell whether the initial diagnoses were correct.

“We took mammograms that already happened, showed them to radiologists and asked, ‘Cancer or no?’ and then showed them to A.I., and asked, ‘Cancer, or no?’” said Dr. Mozziyar Etemadi, an author of the study from Northwestern University.

This was the test that found A.I. more accurate than the radiologists.

Unlike humans, computers do not get tired, bored or distracted toward the end of a long day of reading mammograms, Dr. Etemadi said.

In another test, the researchers pitted A.I. against six radiologists in the United States, presenting 500 mammograms to be interpreted. Over all, A.I. again outperformed the humans.

But in some instances, A.I. missed a cancer that all six radiologists found — and vice versa.

“There’s no denying that in some cases our A.I. tool totally gets it wrong and they totally get it right,” Dr. Etemadi said. “Purely from that perspective it opens up an entirely new area of inquiry and study. Why is it that they missed it? Why is it that we missed it?”

Dr. Lehman, who is also developing A.I. for mammograms, said the Nature report was strong, but she had some concerns about the methods, noting that the patients studied might not be a true reflection of the general population. A higher proportion had cancer, and the racial makeup was not specified. She also said that “reader” analyses involving a small number of radiologists — this study used six — were not always reliable.

The next step in the research is to have radiologists try using the tool as part of their routine practice in reading mammograms. New techniques that pass their initial tests with flying colors do not always perform as well out in the real world.

“We have to see what happens when radiologists have it, see if they do better,” Dr. Etemadi said.

Dr. Lehman said: “We have to be very careful. We want to make sure this is helping patients.”

She said an earlier technology, computer-aided detection, or CAD, provided a cautionary tale. Approved in 1998 by the Food and Drug Administration to help radiologists read mammograms, it came into widespread use. Some hospital administrators pressured radiologists to use it whether they liked it or not because patients could be charged extra for it, increasing profits, Dr. Lehman said. Later, several studies, including one that Dr. Lehman was part of, found that CAD did not improve the doctors’ accuracy and even made them worse.

“We can learn from the mistakes with CAD and do it better,” Dr. Lehman said, adding that A.I. has become far more powerful, and keeps improving as more data is fed in. “Using computers to enhance human performance is long overdue.”

She and Dr. Etemadi said that a potentially good use of A.I. would be to sort mammograms and flag those most in need of the radiologist’s attention. The system may also be able to identify those that are clearly negative, so they could be read quickly and patients could promptly be given a clean bill of health.
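
In practice, that triage could be as simple as partitioning the day's worklist by the model's risk score. A minimal sketch; the cutoffs and scores here are invented for illustration, not taken from either study:

```python
URGENT_CUTOFF = 0.80  # assumed: route these to the radiologist first
CLEAR_CUTOFF  = 0.02  # assumed: flag these as likely negative

def triage(worklist):
    """Split (exam_id, risk_score) pairs into urgent, routine and likely-negative.

    Scores are assumed to be model outputs in [0, 1]; urgent cases are
    returned highest-risk first.
    """
    urgent  = sorted((e for e in worklist if e[1] >= URGENT_CUTOFF),
                     key=lambda e: e[1], reverse=True)
    clear   = [e for e in worklist if e[1] <= CLEAR_CUTOFF]
    routine = [e for e in worklist if CLEAR_CUTOFF < e[1] < URGENT_CUTOFF]
    return urgent, routine, clear

# Hypothetical worklist.
exams = [("A101", 0.91), ("A102", 0.01), ("A103", 0.35), ("A104", 0.85)]
urgent, routine, clear = triage(exams)
print("read first:     ", urgent)   # [('A101', 0.91), ('A104', 0.85)]
print("routine queue:  ", routine)  # [('A103', 0.35)]
print("likely negative:", clear)    # [('A102', 0.01)]
```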

Although developers of A.I. often say it is intended to help radiologists, not replace them, Dr. Lehman predicted that eventually, computers alone will read at least some mammograms, without help from humans.

“We’re onto something,” she said. “These systems are picking up things a human might not see, and we’re right at the beginning of it.”
