Google Medical Brain AI Can Predict When Patients Will Die

Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality.

The promise of digital medicine stems in part from the hope that, by digitizing health data, we might more easily leverage computer information systems to understand and improve care.

An artificial intelligence program developed by researchers at Google can predict when a hospital patient will die with up to 95% accuracy, according to a new paper published in the journal npj Digital Medicine.

According to Popular Mechanics, if this type of AI were implemented in hospitals, it could help them save money, apply their resources more efficiently, and increase the number of lives they save.

In a trial run of this new software at two U.S. hospitals, the Google AI scored 95 percent and 93 percent on its predictions of patient mortality (measured as area under the ROC curve), according to Popular Mechanics.

The website added that this is a dramatic improvement over traditional hospital software, which averages about 85 percent accuracy. The gain is primarily due to the number of variables Google’s AI considers: it analyzes over 100,000 factors to make its predictions, compared with a few dozen or fewer for most other models.
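For context, the figures reported in the npj Digital Medicine paper are area-under-the-ROC-curve (AUROC) scores rather than plain classification accuracy. The minimal sketch below is purely illustrative, not Google’s model or data: it trains a toy logistic-regression mortality classifier on synthetic stand-in features and reports its AUROC with scikit-learn, to show what a number like 0.95 actually measures.

```python
# Illustrative sketch only: synthetic data and a simple model stand in for
# an EHR-based mortality predictor so we can show how AUROC is computed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for EHR-derived features (vitals, labs, note embeddings, ...).
n_patients, n_features = 5000, 50
X = rng.normal(size=(n_patients, n_features))

# Synthetic mortality labels correlated with a few of the features.
logits = X[:, :5] @ rng.normal(size=5) - 2.0
y = (rng.random(n_patients) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # predicted probability of death

# AUROC: the probability that the model ranks a randomly chosen patient
# who died above a randomly chosen patient who survived.
print(f"AUROC: {roc_auc_score(y_test, risk):.3f}")
```

An AUROC of 0.95 means that, given one patient who died and one who survived, the model assigns the higher risk score to the former 95 percent of the time.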

Googleplex – Google headquarters in Mountain View, CA, with bikes in the foreground, December 29, 2016 (Photo Credit: www.shutterstock.com)

Google recently said that it will not pursue, design, or deploy AI in the following application areas:

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” Google CEO Sundar Pichai said in the company’s blog.

 
