AI Can’t Save Us From This Pandemic – But It Can Certainly Help With Future Ones

Much has been said about artificial intelligence (AI) and its vast potential to revolutionize life as we know it. But if we had to pinpoint the field where AI holds the most promise, it would have to be the healthcare industry. Assisted research, drug development, early diagnostics, and advanced treatments are among the many proposed uses for AI-based tools applied to healthcare.

Thus, it’s hardly surprising that people turned to AI for help when the Coronavirus pandemic hit. And it certainly felt like AI was ready to step up. AI-powered solutions like HealthMap or BlueDot’s algorithms allegedly warned about the Coronavirus outbreak 9 days before the World Health Organization officially acknowledged it. It’s been suggested that certain machine learning algorithms could spot COVID-19 from CT scans. That’s not all. AI’s processing capabilities are being used to aid in the development of a Coronavirus vaccine.

So, great news, right? Well, let’s not jump to conclusions here. While all of the above are true statements about current AI, they quickly lose some of their shine with a little context. For instance, human teams flagged the outbreak on the same day as the AI solutions – a mere 30 minutes later. The machine learning algorithms that analyze CT scans might assist with diagnosis but wouldn’t make much of a difference in spotting the disease in its early stages. And AI will certainly speed up vaccine development – but it won’t make the process miraculously fast, as it will still take months.

Though that paragraph feels like a rush of pessimism, it holds truths that have to be said out loud if we want to avoid falling victim to the AI hype. In other words, we have to avoid falling for the extremes. We can’t rely entirely on AI to save us from the current COVID-19 pandemic, because the technology isn’t mature enough to be our sole defense against the disease. But we shouldn’t discard it as a valid option either, because we’d risk losing the chance to develop stronger algorithms during this pandemic that could certainly work in future outbreaks.

In fact, there’s a point to be made for a realistic use of AI in today’s crisis: through software development, we can build better algorithms that will surely help us with future pandemics in at least 3 aspects: prediction, diagnosis, and treatment. But we need two things. One, to learn as much as we can from this pandemic to better design future AI solutions. And two, to make some changes that are less technological and more social.

AI For Pandemic Prediction

The current AI-based solutions used for outbreak prediction rely on the analysis of healthcare reports and news from all over the world. By monitoring multiple sources of information, they can warn about increases in the number of cases of monitored diseases or about suspicious health patterns, ranging from well-known threats like HIV and Ebola to unknown diseases like the current Coronavirus. All that information is then combined with travel data to predict the risk posed by people traveling to and from infected regions.
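
To make the idea more concrete, here is a minimal sketch of that monitoring-plus-travel logic in Python. The report counts, cities, and flight volumes are invented placeholders, and real platforms like BlueDot mine far richer signals at a much larger scale, but the basic recipe is the same: flag unusual growth in disease mentions, then weigh where travelers from the flagged region are most likely to go.

```python
# Simplified sketch of the outbreak-monitoring idea described above.
# All numbers, cities, and routes are invented placeholders.

# Mentions of a suspicious illness extracted from news and health reports, per day
report_counts = {"city_a": [2, 5, 9, 21], "city_b": [0, 1, 1, 3]}

# Average daily air-travel volume from each monitored city to each destination
flights = {
    ("city_a", "city_c"): 1200,
    ("city_a", "city_d"): 300,
    ("city_b", "city_c"): 150,
}

def looks_like_outbreak(counts):
    """Very rough heuristic: flag a city whose daily reports are accelerating."""
    return counts[-1] >= 2 * counts[-2] and counts[-1] >= 10

def importation_risk(origin):
    """Share of outbound travel from a flagged city going to each destination."""
    outbound = {dest: vol for (orig, dest), vol in flights.items() if orig == origin}
    total = sum(outbound.values())
    return {dest: vol / total for dest, vol in outbound.items()}

for city, counts in report_counts.items():
    if looks_like_outbreak(counts):
        print(city, "flagged; importation risk by destination:", importation_risk(city))
```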

That’s the mechanism that allowed BlueDot to warn about COVID-19 in advance and that let Metabiota estimate which countries would report cases of the disease after a certain time, including Italy, Iran, and the USA. Impressive as this may seem, it’s also worth noting that the estimations become less accurate after the initial outbreak. It seems that current AI solutions are better at handling data from a single outbreak point.

It’s not entirely the AI algorithms’ fault, to be fair. For platforms like the one used by Metabiota to work better, they need access to vast amounts of reliable data, which is in short supply for the Coronavirus. Today, those tools use information coming from official government sources around the world combined with data from news outlets and social media.

The Data Problem

The thing is – how reliable is that information? It’s not a matter of truthfulness alone. While governments might have an interest in downplaying the impact of the pandemic, even the most transparent among them can’t be sure of the accuracy of their figures, because no country in the world has tested its entire population.
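
A toy, invented-numbers example makes the point: if only a fraction of infections ever gets tested, even a simple growth-rate estimate fit on the reported figures drifts away from reality the moment testing coverage changes.

```python
# Toy illustration (all numbers invented): an epidemic growing at a fixed rate,
# observed through incomplete testing that ramps up halfway through.
import math

true_cases = [100 * math.exp(0.3 * day) for day in range(10)]   # hypothetical outbreak
reported = [c * 0.15 for c in true_cases[:5]] + \
           [c * 0.40 for c in true_cases[5:]]                   # testing coverage jumps from 15% to 40%

def growth_rate(series):
    """Average day-over-day exponential growth rate of a case series."""
    rates = [math.log(b / a) for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

print(f"actual growth rate:      {growth_rate(true_cases):.2f}")  # 0.30
print(f"estimated from reports:  {growth_rate(reported):.2f}")    # ~0.41, inflated by the testing ramp-up
```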

Even if the AI algorithms had all that information at the most accurate level possible, it would still be impossible to reliably predict the spread, because the model would be incomplete. What’s missing from the equation? People’s habits. Since those models can’t know who’s washing their hands and who isn’t, who is wearing face masks, or who is keeping proper social distance, the algorithms have to work with assumptions.

The inaccuracies in the data coming from all of that introduce noise that results in errors and wrong or incomplete predictions. In other words, machine learning algorithms are only as good as the data they’re fed, so we can’t rely on AI in its current state. To increase the models’ reliability, it’s imperative to access high-quality information, something that can be achieved by doing the following:

  • Increasing the testing to cover as many people as possible
  • Opening up the information from hospitals and healthcare institutions regarding patients
  • Sharing personal data from the public with companies and governments

The combination of those 3 things can provide machine learning algorithms with better data to refine their analyses, which, in turn, will offer more precise estimations. This, ultimately, will make them better at predicting outbreaks and their spread.

Naturally, there are many concerns with doing this. People might feel their privacy is being violated if they are forced to give up certain information (especially regarding their whereabouts, which would be tracked through GPS in the same way China is already doing). Then there’s the veracity of the provided information. Institutions and governments might purposely hide certain figures to avoid being ostracized or condemned. And then there’s the operational feasibility of getting all this data together.

The last of those challenges is the most easily tackled. Using blockchain is a great way to share personal data records between the public, institutions, and governments. Its structure makes a strong case for security and privacy, and its inherent decentralization ensures that no one has privileged access while keeping the data immediately available around the world.
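
As a rough illustration of why that structure helps, here is a minimal sketch of the hash-chaining idea that blockchains are built on. The records and fields are hypothetical, and a real system would add consensus, access control, and proper anonymization on top of this basic mechanism.

```python
# Minimal sketch of blockchain-style tamper evidence for shared health records.
# All records and fields are hypothetical placeholders.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of a block's contents, excluding the stored hash itself."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(record: dict, previous_hash: str) -> dict:
    """Wrap a record in a block whose hash depends on the previous block."""
    block = {"record": record, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

# Each participant (hospital, lab, health agency) appends new blocks...
chain = [make_block({"region": "XX", "new_cases": 42}, previous_hash="0" * 64)]
chain.append(make_block({"region": "YY", "new_cases": 17}, chain[-1]["hash"]))

# ...and anyone can verify that nothing upstream was silently altered.
def chain_is_valid(chain: list) -> bool:
    for prev, curr in zip(chain, chain[1:]):
        if curr["previous_hash"] != prev["hash"] or block_hash(prev) != prev["hash"]:
            return False
    return block_hash(chain[-1]) == chain[-1]["hash"]

print(chain_is_valid(chain))  # True until any earlier block is tampered with
```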

The other (social) challenges are harder to tackle. Since AI solutions that predict pandemics need information from all people and countries, anyone failing to provide it introduces potential error into the final estimate. And that’s the catch – there are lots of people, groups, and governments that strongly oppose giving up sensitive information, even for a common and presumably good goal like predicting epidemics.

It’s those challenges that are keeping AI from helping more in today’s world. If we want things to be different in the future, then we need to weigh those considerations and plan ahead, because having the technology in place isn’t enough: we need the social buy-in and the necessary regulations to make it happen.

Better Diagnosis During Pandemics

If you’ve ever read an article about AI uses in healthcare, then you’ve most definitely read about how it can serve to detect all kinds of diseases, even in their early stages. By analyzing medical imaging, checking symptoms, and comparing millions of medical records, a machine learning algorithm could diagnose diseases way before a human doctor could. That’s a very promising application that’s been in the works for some time now.

The problem, however, remains the same as before – those solutions need high-quality data to learn what to look for. Without that prior knowledge, AI-based applications could be analyzing millions of lung images from COVID-19 patients in their early stages without knowing what the virus’s effects look like that early on.

That might suggest that the solution is ditching the CT scans and looking to a combination of other data to come to an early diagnosis. But that poses two problems. First, getting rid of a reliable indicator such as CT scans would impoverish the analysis rather than enrich it. While those images might not be enough to determine whether someone is infected, they can provide additional information that might not be obvious to the naked eye and that could be picked up by AI (especially when other symptoms confirm the virus’s presence).

And second, the combination of other data implies that we know the disease well enough to train our machine learning algorithms. But that gets us circling back to the same data problem – we don’t know that much about the disease and we don’t have access to the volume of information we need to better understand it. Without that, it’s next to impossible for AI to reliably diagnose the Coronavirus (or any unknown disease, really) at an early stage.

Not everything is lost, though. There are certain techniques for training machine learning algorithms that can help when there’s little data available. AI can learn from a limited number of examples and can even transfer approaches with proven success in other fields to health-diagnosis tasks. Doing that can take us closer to what we want AI to do in a pandemic, but we have to keep in mind that the results those AI solutions give us are neither conclusive nor entirely accurate.
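
Transfer learning is the most common of those techniques: reuse a model pretrained on a large, generic image dataset and retrain only its final layer on the few labeled scans available. A minimal sketch, assuming PyTorch/torchvision and a hypothetical folder of labeled chest CT images, could look like this:

```python
# Transfer-learning sketch: only the final layer is trained on a small CT dataset.
# Assumes a hypothetical folder layout: ct_scans/covid/*.png, ct_scans/normal/*.png
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Preprocessing expected by ImageNet-pretrained models
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

data = datasets.ImageFolder("ct_scans", transform=preprocess)
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

model = models.resnet18(pretrained=True)        # features learned on a large generic dataset
for param in model.parameters():
    param.requires_grad = False                 # freeze everything...
model.fc = nn.Linear(model.fc.in_features, 2)   # ...except a new 2-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                          # a handful of epochs is often enough
    for images, labels in loader:               # when only the head is trained
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```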

Cure Development In a Pandemic

Now that the Coronavirus has struck, there’s a simultaneous race to improve diagnosis and to find a cure. AI is already helping with the latter, as drug discovery was already one of its most established uses in the healthcare industry before the pandemic. By employing deep learning algorithms, laboratories and researchers can analyze vast numbers of biological and molecular structures to find potential cures or, at the very least, candidates worth testing.

That’s precisely where we are right now, with teams all over the world being aided by AI in their race for a cure and a vaccine. Deep learning algorithms can look at well-known drugs as well as novel drug candidates and check their potential interaction with the virus to assess their efficacy. That might make you think that it’s all more or less automatic and instantaneous, but that’s far from being the case. Doing all that not only requires good databases and a lot of computational power, but also another resource that’s always scarce during a pandemic: time.
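
As a simplified stand-in for what those pipelines do, here is a small sketch of similarity-based virtual screening with RDKit. The SMILES strings are placeholders rather than real candidates, and ranking by structural similarity to a known binder is a much cruder filter than the deep models actually used to predict drug-virus interactions, but it shows the shape of the task: score many candidates cheaply so the expensive lab work can focus on the most promising ones.

```python
# Toy virtual-screening sketch (assumes RDKit is installed; the SMILES strings
# below are placeholder molecules, not real antiviral candidates).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

known_binder = "CC(=O)Oc1ccccc1C(=O)O"             # placeholder reference molecule
candidates = {
    "candidate_a": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",   # placeholder SMILES
    "candidate_b": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
}

def fingerprint(smiles):
    """Morgan (circular) fingerprint of a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

ref_fp = fingerprint(known_binder)
scores = {name: DataStructs.TanimotoSimilarity(ref_fp, fingerprint(smi))
          for name, smi in candidates.items()}

# Higher Tanimoto similarity = structurally closer to the known binder,
# so a (very rough) hint that the candidate is worth testing first.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```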

If you’ve been reading the news, you’ve certainly come across all kinds of promises regarding vaccines and cures for COVID-19, ranging from claims that we’ll have one by the end of the year to claims that it could take years. Without taking anyone’s side, we can point out that all of them agree it will take at least months to develop a cure – even with AI’s help.

Another thing we could use AI for is predicting future scenarios where the virus mutates and develops new traits (increased resistance or new modes of contagion, for instance). This could allow us to prepare for those dire situations. The problem is, once again, that we don’t have enough information about the mutation process to anticipate realistic scenarios.

So, in terms of drug development, AI is rather a powerful assistant that can point in promising directions and help with repetitive tasks, but it isn’t powerful enough yet to be tasked with finding a cure on its own. In the future, with a better understanding of this pandemic, AI will have a real-world case to base its assumptions on, which could lead to more accurate predictions. That, of course, will only happen if the current data is properly gathered, cleaned, and classified for future use.

A Few Last Words

If there’s something to take away from all this, it’s that AI isn’t the savior some might think it is – at least not in the current state of affairs. That isn’t to say that AI is useless. It’s certainly helping in today’s battle, and the experience collected here can definitely help in the development of more sophisticated machine and deep learning algorithms that can aid us in the fight against future pandemics.

We have to keep in mind that there’s a caveat to that, though. The only way we can unleash the full power of AI is by having reliable and complete data about the phenomena we want to study. In a pandemic’s case, that means accessing patient data from around the world, something that largely depends on governments and institutions but that ultimately falls into the public’s lap.

The decisions we make around that subject and the debates that come out of this pandemic will be key for the future of AI in healthcare. It won’t be easy to tackle those challenges, but it’s highly important that we do so to prepare ourselves for the future pandemics that will, sooner or later, happen again.
