MedyMatch Technology Ltd., a deep learning and artificial intelligence company, announced Tuesday that it has reached a major milestone: the availability, for research, of an artificial intelligence-based technology that can help detect intracranial hemorrhage, or brain bleed, which can occur in cases of brain trauma and stroke. Subtle bleeds are difficult to detect, and if misinterpreted or missed by a physician, they can lead to serious patient injury and even death.
It is anticipated that this new technology will come to the marketplace in several forms: a patient-specific computer-assisted detection (CAD) tool used by physicians in the emergency room to assist in the detection of intracranial bleeds; a prioritization algorithm operating within a PACS or on a CT scanner to help prioritize cases based on the potential presence of a bleed; and a population-level tool to provide insights that proactively identify bleed cases.
MedyMatch utilizes advanced cognitive analytics and artificial intelligence to deliver real-time decision support tools to improve clinical outcomes in acute medical scenarios. The foundation of clinical discovery and value creation lies in the deep clinical understanding of how to utilize the right data (electronic medical record, medical imaging, and genomic data).
MedyMatch's team of artificial intelligence, machine learning, and deep learning experts, along with its medical and scientific advisory boards, is achieving breakthroughs in standards of cost and care.
Deployment will be customer driven, as either a cloud-based or on-premises solution with a near-zero footprint that integrates seamlessly into the hospital enterprise and clinical workflow.
Per the American Heart Association and American Stroke Association (AHA/ASA), stroke is the fourth leading cause of death and one of the top causes of preventable disability in the United States. Stroke affects 4 percent of U.S. adults, and it is forecast that by 2030 there will be approximately 3.4 million stroke victims annually in the U.S., costing the healthcare system $240 billion on an annual basis.
“The generalized 3D deep vision platform approach has the promise to tackle many diseases. We have developed the capability to consider the full richness of medical imaging along with any other patient data,” said Gene Saragnese, chairman & CEO of MedyMatch. “Our platform and A.I. approach will facilitate rapid decision support development, clinical discovery and propel MedyMatch into adjacent decision support opportunities.”
It is envisioned that MedyMatch’s technology will assist physicians and not replace them, essentially providing a virtual “second set of eyes” to help physicians assess radiology images as accurately as possible.
“Consideration of the whole patient differentiates MedyMatch from traditional CAD applications,” said Dr. Jacob Cohen, Chief Technical Officer of MedyMatch. “While traditional CADs strictly focus on pixel data, MedyMatch’s technology applies deep learning and computer vision (Deep Vision) to interpret the full richness of the 3D imaging data together with the patient’s Electronic Medical Record (EMR), allowing the system to consider the ‘whole’ patient.”
MedyMatch’s goal is to harness clinical understanding in conjunction with computer vision and deep learning to provide real-time, artificial intelligence-based clinical decision support to physicians in the emergency room. To accomplish this, MedyMatch considers all of the multidimensional patient data, processing raw imaging concurrently with other relevant patient data using leading-edge machine learning technologies.
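To make the "whole patient" idea concrete, the sketch below shows one generic way imaging and EMR data can be combined in a single model input: pool a 3D volume into a feature vector, concatenate it with EMR-derived features, and score the result. This is a minimal illustration of multimodal late fusion, not MedyMatch's actual architecture; the feature choices, function names, and weights are all hypothetical, and a real system would use learned convolutional features rather than simple pooling.

```python
import numpy as np

def fuse_features(volume: np.ndarray, emr: np.ndarray) -> np.ndarray:
    """Late fusion: summarize a 3D volume into a small feature vector,
    then concatenate EMR-derived features (hypothetical stand-in for
    learned CNN features)."""
    img_feats = np.array([volume.mean(), volume.std(), volume.max()])
    return np.concatenate([img_feats, emr])

def bleed_score(features: np.ndarray, weights: np.ndarray, bias: float = 0.0) -> float:
    """Logistic score in (0, 1); weights/bias would be learned in practice."""
    z = float(features @ weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative use: a toy 4x4x4 "CT volume" plus two EMR features
volume = np.ones((4, 4, 4))
emr = np.array([0.5, 1.0])       # e.g. normalized age, anticoagulant flag
feats = fuse_features(volume, emr)
score = bleed_score(feats, weights=np.zeros(feats.shape[0]))
```

With zero weights the score is exactly 0.5, the model's uninformed prior; training would move the weights so the score reflects bleed likelihood.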
In June, MedyMatch partnered with Capital Health, the first of several partnerships with hospitals in the United States intended to improve stroke patient outcomes. As part of the agreement, Capital Health will provide anonymized data to MedyMatch for use in the development of its first decision support tool, directed towards stroke patients.
Additionally, MedyMatch will leverage medical imaging libraries across multiple imaging modalities, including CT, X-ray, MRI, ultrasound, and PET, as part of its research and development efforts to train its next set of applications and deep learning algorithms.