From Guidelines to Standards: Why Comprehensive AI Regulation is Essential to Spurring Innovation


Gordon Kessler, JD, MBA – General Counsel, Chief Administrative Officer at AiCure

Artificial intelligence continues to play a larger role in our daily lives, and as its development and influence grow, so do concerns about how it should be regulated. The FDA has traditionally reviewed chemical molecules or hardware medical devices that do not change their behavior once released into the real world and are not updated over time. AI, by contrast, is not a static tool locked in factory settings – its ever-changing nature makes proper, uniform governance difficult and makes oversight of continually improving algorithms burdensome for both the agency and the entities deploying them. Particularly in industries such as healthcare and pharma, where biased algorithms and inaccurate results carry high stakes, additional guidance and oversight of algorithm development are sorely needed to address AI bias and user mistrust. To unlock the potential of AI, stakeholders across industries and levels of government will need to come together to create comprehensive regulations that achieve a balancing act: democratizing the data used to train AI while protecting privacy, and standardizing the process by which algorithms are developed without stunting innovation.

Regulators and AI developers alike recognize this need and have made recent strides to establish guardrails that advance the safety and quality of AI. There are three main ways AI regulations have evolved, and must continue to evolve, to meet the challenges developers and users face today:

1. Provide greater transparency into how algorithms are built, trained, and validated

2. Ease the process to update or retrain algorithms as AI advances or proves to exhibit bias

3. Define universal parameters of data privacy protections and data sharing in clinical research to fuel AI advancements

From AI Transparency to User Trust

It’s no secret that there is a deficit in the general public’s trust in AI. A recent World Economic Forum survey found that only 50% of consumers trust companies that use AI as much as they trust other companies. Part of this mistrust stems from the lack of visibility into how an algorithm reaches its conclusions and how exactly it uses data to produce its output. As AI development in healthcare grows, it is essential that the guidelines on its development and use keep pace and build trust in AI solutions. Just as the FDA requires detailed nutrition and ingredient information on the side of cereal boxes to help consumers make informed health decisions, the same transparency is needed in AI technology so patients and physicians can make educated care decisions. Rather than nutrition information, AI guidance should help verify that a model was developed with high-quality data representative of its intended patient population, and should require clear disclaimers on its use.
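
One way to picture this kind of “nutrition label” is as a structured, machine-readable summary that travels with the model. The sketch below is a minimal, hypothetical example in Python; the field names and values are illustrative assumptions, not part of any FDA requirement or any particular product.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical transparency label for a clinical AI model (illustrative only)."""
    model_name: str
    intended_use: str                        # the clinical question the model is meant to answer
    intended_population: str                 # who the model was trained and validated for
    training_data_sources: list[str]         # provenance of the training data
    demographic_coverage: dict[str, float]   # share of training data by subgroup
    validation_metrics: dict[str, float]     # held-out performance, ideally reported per subgroup
    known_limitations: list[str]             # disclaimers on use

# Entirely fictional values, shown only to illustrate the idea of a transparency record.
card = ModelCard(
    model_name="symptom-classifier-v2",
    intended_use="Flag likely disease progression from routine clinical measurements",
    intended_population="Adults 18-85 enrolled in outpatient clinical trials",
    training_data_sources=["Trial A (2019)", "Trial B (2021)"],
    demographic_coverage={"female": 0.52, "age_65_plus": 0.18, "darker_skin_tones": 0.21},
    validation_metrics={"overall_auc": 0.91, "auc_darker_skin_tones": 0.88},
    known_limitations=["Not validated for pediatric use"],
)
print(card.intended_population)
```

A record like this gives patients and physicians the same at-a-glance visibility into a model’s provenance that an ingredient list gives shoppers.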

There is promising news that the federal government clearly recognizes this need. In 2021, the FDA issued its Good Machine Learning Practice (GMLP) guidance for medical device development, which outlines ten guiding principles to inform the development of AI. While these guidelines provide a good foundation, enforcing science-backed, ethical practices in AI innovation will take more than a good-faith agreement on what developers should do. We must move from recommendations to standards that everyone is held to – standards governing the back-end work of algorithm development, along with requirements for transparency into how an algorithm was created and proof that the developer adhered to these guidelines.

Adapting Regulation to Account for the Nature of AI

While developing a strong foundation for an algorithm is an important first step, AI should be constantly learning and improving, always providing the most up-to-date solution to its users. So how does the FDA adequately regulate technology that could change months after it is approved? Imagine if every time your iPhone needed a new iOS update, it had to go through a regulatory review process that could take up to three years. Similarly, the FDA recognizes that its process for reviewing and approving new AI devices does not move quickly enough for the way these products function, leading to the use of outdated and potentially unsafe algorithms and stymieing innovation. To keep up with the evolving nature of AI, the FDA recently issued draft guidance allowing AI devices to be deployed with the most updated versions of their algorithms, so long as developers submit a Predetermined Change Control Plan (PCCP) outlining how they expect an algorithm to change over time as more comprehensive training data is employed. As long as the outputs from an updated product meet agreed-upon, pre-defined criteria, developers can submit a short PCCP at the time of their product submission rather than a full new device application each time there is an update.

This guidance would have a particular impact on algorithms that may initially have been trained on smaller or unintentionally biased datasets, enabling developers to retrain AI to improve performance across minority groups without needing to seek additional regulatory approval. AI is only as strong as the data it is fed, and the data frequently used in algorithm development may be biased against certain minority populations if they are underrepresented in clinical research, the source of much of that training data. An algorithm may start out working perfectly well for one population, but if it is not trained on diverse datasets, it will inevitably underperform when applied to other populations. For instance, if one were developing an algorithm to identify cancerous tumors and trained it only on data for stage 4 tumors, it might not detect stage 1 tumors with similar accuracy. A similar problem arises with skin tone: an algorithm trained only on data from patients with lighter skin may struggle with data from patients of a darker complexion.
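
To make the point concrete, here is a rough, hypothetical sketch (using synthetic data and scikit-learn, not any real clinical pipeline) of the kind of stratified check that exposes this problem: a classifier trained on data skewed toward one group looks accurate on that group yet underperforms on the underrepresented one – exactly the gap that retraining under a PCCP would aim to close.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic cohort: features centered on a group-specific mean, with a matching label rule."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
    return X, y

# Training data heavily skewed toward group A; group B is underrepresented.
X_a, y_a = make_group(2000, shift=0.0)
X_b, y_b = make_group(100, shift=1.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Stratified evaluation: report performance per subgroup, not just overall.
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    X_test, y_test = make_group(1000, shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name} accuracy: {acc:.3f}")
```

Running a check like this before and after retraining on more representative data shows whether an update actually narrowed the gap between subgroups.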

This draft guidance takes a critical step toward addressing bias in AI devices, but additional federal regulations are needed to ensure clinical research datasets are representative of the broader population from the beginning and to make certain this sensitive data can be accessed for AI development. While the availability of diverse data on characteristics such as race, gender, and age is essential for training unbiased algorithms, some state privacy laws, like the California Privacy Rights Act, may limit the ability to collect or retain this type of information in the first place. It will be essential for future federal AI regulations to balance addressing bias in algorithm development against protecting patient privacy in the capture of sensitive demographic data. As AI tools become increasingly integral to how we research drugs and deliver new treatments, we should emphasize inclusivity both in the devices our patients and pharmaceutical companies use and in the laws governing their development, to help these tools reach their potential and make healthcare a more equitable industry.

Defining Privacy Parameters to Advance Data Quality

Patient privacy is an essential part of any clinical trial’s success: patients need to be aware of, and feel comfortable with, the procedures in place to keep their sensitive health information safe. Because protecting patients’ privacy is so important, governments across states and countries have developed varying ideas about how best to do it. This has led to a patchwork of regulations that can not only slow down the pace of research but also ultimately erode patient trust and participation in clinical trials. The lack of consistency in privacy regulations also means that critical data that could be used to better train AI algorithms remains siloed and inaccessible.

Across the globe, governments, technology companies, and healthcare organizations must join forces to support the development of streamlined international privacy regulations. This way, patients can feel more confident about sharing their vital data in clinical research, which would empower drug companies to create therapies that more precisely meet the patient population’s needs. Once suitable ground rules have been set, it will be much easier for data to flow seamlessly across organizations and borders. With global guidelines in place outlining the proper de-identification of patient data, personal health insights from prior research can be used to support and strengthen future clinical studies and AI development. It is also important that future regulations permit researchers to use diverse clinical data without risking trial delays or, worse, the rejection of sponsors’ protocols because the data falls outside the parameters of a particular trial.

A new data privacy framework program between the United States and the European Union aims to ease some of these challenges. Built on an unprecedented adequacy decision, the Data Privacy Framework (DPF) program seeks to give organizations a reliable means of transferring personal data between the United States and the European Union while guaranteeing data protection across all participating countries. Prior to this decision, data transfer between the US and EU was all but shut off, forcing researchers to negotiate a series of contractual clauses before sharing data in a new jurisdiction. Now, the governments of the US and EU have acknowledged that the ability to share data is critical and that it is important to strike the right balance between protecting citizens and providing data access in a responsible way to spur innovation. With this program in place, researchers will be able to conduct clinical trials across international lines much more easily, leading to studies that are more representative of the intended patient population. They will also spend less time sifting through international privacy laws to ensure compliance and more time advancing medical breakthroughs.

The biggest benefit of this program is its effect on data quality. When a wider population can take part in clinical research, datasets become more diverse, which in turn gives algorithms a much better pool of data to train on. AI-powered solutions will then be able to get it right the first time and have a broad, equitable impact on all patients, regardless of their background.

Regulating the Future: Setting Responsible Innovation Standards

The Data Privacy Framework program is a global accomplishment in data sharing that will have far-reaching impacts on clinical research and AI development across the world. Back in the United States, we are taking positive strides in defining what responsible development of AI technology looks like. Combined with future agreements allowing the use of clinical trial data in AI training sets, the ability to collect, transfer, and use this trove of data could drive significant progress in allowing AI to improve healthcare around the globe.

There has been recent momentum around the federal government’s and big tech’s commitments to thoroughly vet and validate algorithms. Executives from seven of the country’s leading tech companies recently met with President Biden at the White House to announce their agreement on several commitments regarding the sharing, testing, and development of new technology. While these voluntary agreements are a far cry from true AI legislation, they are an important first step in defining standards for AI development and data sharing that future regulations can build on. It is vital that future AI legislation involve the voices of all stakeholders, not just big tech companies. Smaller companies that have been developing AI for decades have valuable lessons to share – not only should they have a seat at the table, but they should be critical drivers of this evolving regulatory conversation.

The potential of AI technology in healthcare has been discussed for years, and we have now reached a tipping point. Implementing all of the regulation AI needs to enhance data quality and enable ethical, impactful innovation will be an evolving process. To truly deliver on the promise of AI-powered technology in clinical research and care, we must move beyond best practices to enforceable regulations that every organization and developer must follow, with clear parameters governing responsible development, data sharing, and transparency into how algorithms were created and how personal data will be used. Only then will we be able to unlock the true power of data and witness the full capabilities of AI to create a more equitable, healthier, and safer world.
