Digital instruments such as sensors and wearables are transforming the industry’s definition of accuracy in three measures: volume, practicality, and precision.
Healthcare’s digital revolution is reshaping medical research, diagnostics, and therapeutics. One prime example is the rise of digital health technologies (DHTs) such as sensors, wearables, and digital biomarkers, which are gaining widespread adoption among patients, providers, and clinical researchers. In fact, forward-thinking sponsors are turning to sensors and wearables to capture richer, higher-quality data in clinical trials across therapeutic areas and sectors.
The life sciences industry’s embrace of sensors and wearables underscores that these tools aren’t a flash in the pan. On the contrary: Digital instruments such as sensors and wearables are rapidly transforming the industry’s definition of accuracy across three key measures: volume, practicality, and precision. With the added benefit of AI and machine learning, these tools could even allow sponsors to develop novel digital biomarkers, with profound implications for scientific discovery.
In this article, we’ll take a deep dive into sensors, wearables, and digital biomarkers within clinical research. We’ll propose a new paradigm of accuracy, explore how these tools are shaping the future of the industry, and discuss how to assess these devices for potential inclusion in a clinical trial.
Sensors, wearables, and digital biomarkers are interconnected, overlapping concepts with some key distinctions. In order to build a clear framework for future discussion, we will provide a definition of each.
Medical sensors are small electronic devices that capture data from a patient’s body in real time. Subcategories of sensors include wearables, portables, and digestibles. In other words, patients can carry, wear, or ingest sensors.
The sensor’s location on or inside the body varies, depending on what is being measured. Sensors can track a wide range of health data while the body is active or resting.
After sensors collect data, they transmit it to a connected device via wireless technology. Research teams can then analyze and interpret data submissions from study participants via a central dashboard.
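As a rough illustration of how such wireless transmissions might be aggregated for a central dashboard, here is a minimal Python sketch. The record structure, field names, and values are hypothetical and not drawn from any specific platform.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical record for a single wireless sensor transmission.
@dataclass
class SensorReading:
    participant_id: str
    metric: str          # e.g. "heart_rate"
    value: float
    recorded_at: datetime

def summarize_for_dashboard(readings: list[SensorReading]) -> dict:
    """Group readings by participant and metric for a simple dashboard view."""
    grouped: dict[tuple[str, str], list[float]] = {}
    for r in readings:
        grouped.setdefault((r.participant_id, r.metric), []).append(r.value)
    return {
        key: {"n": len(values), "mean": round(mean(values), 1)}
        for key, values in grouped.items()
    }

readings = [
    SensorReading("P-001", "heart_rate", 72, datetime(2024, 1, 1, 8, 0)),
    SensorReading("P-001", "heart_rate", 95, datetime(2024, 1, 1, 12, 0)),
]
print(summarize_for_dashboard(readings))
```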
Sensors are exceptionally valuable to clinical researchers because they allow study teams to generate and monitor continuous and/or intermittent data. While traditional or electronic patient-reported outcomes provide subjective data snapshots, sensors can provide a steady stream of objective data. And, by generating larger volumes of real-time data, sensors serve as an effective alternative to other forms of outcome reporting. They can be especially useful for trials within disease states and therapeutic areas that benefit from a larger body of data.
There is one subcategory of sensors in particular—wearable sensors, or wearables—that is gaining significant traction in clinical research.
Wearables offer continuous monitoring and data reporting through devices that patients physically wear on their bodies. Sensors can be integrated into a wide range of wearable objects, including “smart” shirts, vests, watches, glasses, or socks.
Some researchers group sensors such as skin patches and intra-body devices in the wearables category. However, a strict definition of wearables includes sensors that a patient can easily put on and take off (“wear”) as needed according to the purposes of a study.
It’s important to note the differences between wearable sensors and consumer wearables. Some consumer-grade activity trackers may not deliver the accuracy needed for clinical research.
Wearable sensors, however, deliver constant, precise, real-time physiological and behavioral data. When appropriately included in a study protocol, these medical sensors can supply rich and robust information as either a complement to or a replacement for certain ePRO components.
In developing wearables, device manufacturers often focus on optimizing the form factor (physical characteristics) of a device, which improves the ease of use and overall participant experience in a trial. When wearables are not optimized for trials, compliance can suffer.
Despite these challenges, there is broad consensus that accessible tools like sensors and wearables have the potential to unlock new ways of capturing and interpreting patient data—with significant implications for clinical research.
Digital biomarkers represent another important area of advancement in clinical trial measurement. As the industry continues to debate the precise definition of "digital biomarkers," the FDA has stepped in to issue guidance and elaborate on the topic in a Nature article:
“FDA defines a digital biomarker to be a characteristic or set of characteristics, collected from digital health technologies, that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions.”
Building on its definition of digital biomarkers, the FDA shared several examples from published literature.
As we can see from the FDA’s definition and examples of digital biomarkers, a clear path can be traced between sensor-enabled data collection and digital biomarker use in clinical trials. Sensors and wearables collect data on physiological characteristics (digital biomarkers) that can be used to evaluate the efficacy and/or safety of an intervention.
For instance, study teams are digitally measuring changes in gait, speech, and slowed or diminished movement as indicators of Parkinson's disease or other nervous system disorders. Other digital measurements associated with changes in perceptual-motor coordination, cognitive processing speed, prospective memory, spatial memory, gait, and inhibition are increasingly being considered as possible indicators of dementia.
To better understand the nuances of digital biomarkers, we can divide them into two subcategories: passive and active.
Biomarkers and their application are not novel: “Legacy” biomarkers have served as an integral component of clinical research and practice for many years. However, digital biomarkers differ from legacy biomarkers in that they harness technology to gather and apply objective data in multiple medical applications.
A simple example can be seen in blood pressure measurement. Standard blood pressure readings taken by a clinician with a manual sphygmomanometer represent a legacy biomarker, while blood pressure obtained through a remote sensor can be considered a digital biomarker. In a recent meta-analysis of the diagnostic accuracy of mercurial versus digital blood pressure measurement devices published in Nature, a broad range of digital blood pressure biomarkers for at-home use were shown to have moderate accuracy and to provide accurate information on blood pressure with which diagnostic and treatment decisions could be made. According to the Nature article, access to digital blood pressure devices “changes the quality of detection of hypertension and management and thus contributes to early diagnosis and prevention."
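To illustrate how paired readings from a legacy device and a digital device might be compared, here is a minimal Bland-Altman-style agreement sketch in Python. The readings are invented, and this is a generic agreement analysis, not the method used in the cited meta-analysis.

```python
from statistics import mean, stdev

# Hypothetical paired systolic readings (mmHg): clinic sphygmomanometer vs. home digital cuff.
clinic = [128, 142, 135, 120, 151, 138]
home   = [125, 145, 131, 122, 148, 140]

diffs = [h - c for h, c in zip(home, clinic)]
bias = mean(diffs)             # average disagreement between the two methods
loa = 1.96 * stdev(diffs)      # Bland-Altman 95% limits of agreement

print(f"bias = {bias:+.1f} mmHg, limits of agreement = +/- {loa:.1f} mmHg")
```

A small bias with narrow limits of agreement would suggest the digital cuff tracks the clinic measurement closely enough for the intended decision.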
Sensors, wearables, and digital biomarkers can all be grouped under the term digital instruments. Digital instruments are defined as sensors and wearables that collect and quantify measurable patient data—enabling greater accessibility and greater accuracy. Using digital instruments, researchers are able to capture clinical data from patients in a way that is potentially transformative for clinical research.
When deployed in decentralized and hybrid clinical trials, digital instruments enable the participation of populations who have been underrepresented in clinical research. Digital instruments can be shipped directly to patients, allowing measurements to be conducted at home and breaking down traditional barriers such as geographic distance from a study site.
For instance, conditions such as fibromyalgia and chronic fatigue syndrome can limit a patient’s mobility and energy and, in turn, their ability to visit a clinical trial site. As a result, these and other therapeutic areas remain difficult to study. Digital instruments help bridge such gaps, expanding access to populations traditionally excluded from clinical studies and mitigating stigmas associated with some medical conditions, all of which opens new frontiers for research.
Researchers in Pharmacological Reviews note the benefits of digital tools, including “increasing participation rates and enabling trials to be conducted in vulnerable populations with chronic diseases, such as the elderly, psychiatric patients, and children.” They add: “These patient groups have traditionally been neglected in clinical research because of a lack of mobility, additional ethical barriers, and low recruitment rates.”
The benefits of digital instruments extend beyond recruitment and into research authenticity and real-world practicality. These tools “allow study teams to measure the effects of an intervention in a patient’s natural environment, increasing a study’s ecological or ‘real world’ validity,” the authors note. “The objective nature of these measurements can lead to higher sensitivity and objectivity, compared with clinical rating scales. Wearable technology also offers high-frequency and situation-relevant measurements, moving away from the artificially contrived intervals used in clinical trials."
Increased accuracy is an ever-present objective for sponsors. Digital instrumentation is primed to help them achieve it, allowing for more frequent and ongoing monitoring in a real-life setting with increased precision—all factors that drive accuracy. The volume of data gathered through digital instruments can support endpoints that outperform traditional, infrequent assessments.
For example, a patient who checks their blood sugar manually four times daily receives four individual readings. But, a continuous glucose monitor provides clinicians with “big picture” insights—revealing when and whether glucose levels bottom out overnight and when exactly they are spiking.
This example demonstrates how digital instruments can transform medical assessment from snapshots to continuous or intermittent real-life tracking of data in a patient’s normal environment. These data points can complement technologies such as electronic patient-reported outcomes (ePRO) and electronic clinician-reported outcomes (eClinRO).
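The toy Python sketch below makes the glucose contrast concrete. The sampling interval, thresholds, and readings are simulated and purely illustrative.

```python
import random

# Simulated day of continuous glucose monitor (CGM) readings, one every 5 minutes (mg/dL).
random.seed(0)
cgm = [random.gauss(110, 25) for _ in range(288)]      # 288 samples = 24 h at 5-min intervals
fingersticks = [cgm[i] for i in (84, 144, 216, 264)]    # four spot checks during the day

overnight = cgm[:84]                                     # roughly midnight to 7 a.m.
lows = [g for g in overnight if g < 70]                  # hypoglycemic excursions the spot checks miss

print(f"fingerstick readings: {[round(g) for g in fingersticks]}")
print(f"overnight lows caught by CGM: {len(lows)} (minimum {min(overnight):.0f} mg/dL)")
```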
The dual benefit of digital instruments lies in their ability to unlock richer, more precise assessments that are also easier for participants. In a traditional clinical trial, comprehensive assessments are burdensome and therefore limited in frequency, so “they only provide snapshots of treatment efficacy,” according to an article in the Journal of Medical Internet Research.
“However, symptoms can fluctuate from week to week, day to day, and even within a single day,” the authors note. They add that in a traditional trial, a patient’s symptoms might improve or worsen by chance because of “factors unrelated to treatment efficacy.” Yet, without real-world data, it’s difficult to understand what’s causing those changes. Additionally, when a measurement is too infrequent, study teams run the risk of long time lapses prior to the detection of adverse or serious adverse events.
Continuous and intermittent real-life tracking from digital instruments is key to creating meaningful digital measures that reflect real-life scenarios. These instruments create a cohesive feedback loop that supports the early detection of adverse events and helps prevent health risks and costly delays.
Pharmaceutical and medical device sponsors are particularly interested in digital instruments’ ability to reduce the “noise” surrounding relevant clinical signals. For example, in assessments of cognition, the signal is the sensitivity of a measure to detect biological and cognitive changes due to the therapeutic intervention. Noise refers to the effects of external factors that can influence these measures. Improving the signal-to-noise ratio by eliminating as many external variables as possible helps to ensure that sponsors receive accurate data specific to the therapeutic manipulation in question.
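One rough way to quantify this idea is an effect-size-style signal-to-noise calculation. The cognitive scores below are invented, and the formula is a simplified illustration rather than a prescribed trial analysis.

```python
from statistics import mean, variance

# Hypothetical repeated cognitive scores: the treatment-related change is the "signal",
# session-to-session variability from everything else is the "noise".
baseline = [52, 49, 51, 48, 50, 53]
on_drug  = [58, 61, 57, 60, 59, 62]

signal = mean(on_drug) - mean(baseline)                 # observed change across conditions
noise  = (variance(baseline) + variance(on_drug)) / 2   # pooled within-condition variability
snr    = signal / noise ** 0.5

print(f"signal = {signal:.1f}, noise SD = {noise ** 0.5:.1f}, SNR = {snr:.2f}")
```

Reducing extraneous variability (the denominator) makes the same treatment effect easier to detect, which is exactly the appeal of more controlled, more frequent digital measurement.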
When it comes to traditional ePRO reporting, biases, environmental factors, and inconsistencies in measurement negatively impact accuracy. Participant training can reduce some of this noise. But, digital instruments can go a step further, facilitating—according to Dr. Nathan Cashdollar, Director of Digital Neuroscience at Cambridge Cognition—the assessment of “people with psychiatric and neurological disorders daily, and remotely, without the supervision of a healthcare professional, thus providing an improved signal-to-noise ratio, [which enables] a more sensitive metric of successful therapeutic interventions."
Realizing the benefits of digital instruments requires an intentional approach. In order for these innovations to be practically applied in clinical research—and, ultimately, generate more accurate and meaningful data—sponsors must first identify the right signal and then validate prospective digital instrumentation tools based on their ability to generate that signal.
The promise of digital instruments—and of the real-time, real-life signal capture they facilitate—to transform clinical research is clear. But, how should sponsors leverage these powerful capabilities to demonstrate efficacy and bring treatments to market more quickly and cost-effectively?
The starting point for any successful trial is the establishment of the right study endpoints. As advances in digital instruments and biomarkers evolve the way we measure outcomes, the clinical research industry has begun to think about the implications for trial endpoints. Too often, protocols in decentralized clinical trials adopt and perpetuate suboptimal outcome assessments. The opportunity exists to leverage more nuanced digital measurements and outcomes to achieve stronger, increasingly digital endpoints.
Digital endpoints allow study teams to use digital instruments to monitor patients in their natural, real-world environment. These tools ultimately provide a more accurate assessment of the patient’s lived experience, including granular data that was previously undiscoverable.
To maximize the relevancy and accuracy of this data, sponsors must first determine which outcomes to measure in the clinical trial—one of the most critical study decisions—and then, in the earliest stages of study design, choose digital instruments that can best capture those outcomes.
In this process, study teams should consider the natural environments in which their endpoints arise. According to an article in Digital Biomarkers journal, successful digital endpoint and digital biomarker selection require intense interdisciplinary collaboration and “the development of an ecosystem in which the vast quantities of data those digital endpoints generate can be analyzed.”
The “vast quantities of data” these digital instruments gather should not be underestimated. Monitoring a patient continuously for several weeks generates gigabytes of information. A virtual research organization can partner with sponsors to streamline the process and determine how much of that data needs to be stored and analyzed.
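A back-of-the-envelope estimate shows how quickly continuous monitoring adds up. The sampling rates and sample sizes below are assumed for illustration and will vary by device.

```python
# Rough estimate of raw data volume from one wearable, assuming single-lead ECG
# at 250 Hz (2 bytes per sample) plus a 3-axis accelerometer at 50 Hz
# (2 bytes per axis). These rates are illustrative, not device specifications.
ecg_bytes_per_s   = 250 * 2
accel_bytes_per_s = 50 * 3 * 2
seconds_per_week  = 7 * 24 * 3600
weeks             = 4

total_bytes = (ecg_bytes_per_s + accel_bytes_per_s) * seconds_per_week * weeks
print(f"~{total_bytes / 1e9:.1f} GB of raw signal data per participant over {weeks} weeks")
```

Under these assumptions, a single participant generates roughly 1.9 GB of raw signal data per month, before any derived measures or metadata are added.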
As demonstrated, the validation of digital endpoints for clinical research studies is essential, and it requires the identification of the right digital instruments and outcomes from the outset.
When choosing a digital instrument for a clinical trial, it’s tempting to want to use all of the bells and whistles of a new device that promises to measure everything under the sun. However, more often than not, simple solutions are best. In a trial for an antihypertensive drug, for example, researchers might consider using a smartwatch to monitor pulse, activity, sleep habits, and skin temperature. But, if the study’s primary endpoint is reduced blood pressure following treatment, the team might choose a digital blood pressure cuff with fewer but better-quality signals.
After identifying the right digital endpoint for a study, sponsors must undertake three steps to select the correct digital instrumentation for their clinical research. Digital instruments for clinical trials must succeed through all three steps—technical, clinical, and participant validation—to ensure accurate data collection and a successful trial.
Technical validation of a digital instrument must precede any of the other, more “hands-on” steps of validation. The goal in this stage is to assess whether an instrument is fit for use in the trial.
Technical validation determines how usable, reliable, and reproducible the technology is. The device must meet the healthcare industry’s minimum technological standards and support an automated flow of data, requiring minimal manual aggregation and manipulation by expert raters and trial teams.
Once again, reducing the participant burden and providing a good user experience are key. Pharmacological Reviews notes: "In this phase of validation, it is also advised to consider the amount of training and instruction that will be necessary to ensure measurements are conducted correctly by patients." Susan Dallabrida, CEO of SPRIM (experts in DCT protocol development and optimization), adds: "In our experience, short training modules can significantly improve the accuracy of outcomes. When patients are clear about how and when to use a device, so many issues can be avoided.”
In addition to complying with regulations set forth by agencies such as the FDA, digital instruments should offer minimal inter-device and intra-device variability. This helps to facilitate the collection of the most accurate data possible in a clinical trial. The device should also protect patient privacy through measures such as data encryption.
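One simple way to screen for inter- and intra-device variability is the coefficient of variation across repeated readings. The heart-rate values below are hypothetical, and the metric choice is illustrative; formal validation typically adds statistics such as the intraclass correlation coefficient.

```python
from statistics import mean, stdev

def cv_percent(values: list[float]) -> float:
    """Coefficient of variation: spread relative to the mean, as a percentage."""
    return 100 * stdev(values) / mean(values)

# Hypothetical resting heart-rate readings from the same participant.
same_device_repeats = [61, 62, 60, 61, 63]     # repeated readings on one device (intra-device)
three_devices       = [61.4, 64.9, 59.8]       # per-device means across three units (inter-device)

print(f"intra-device CV: {cv_percent(same_device_repeats):.1f}%")
print(f"inter-device CV: {cv_percent(three_devices):.1f}%")
```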
Following technical validation is clinical validation, also known as medical validation. In this phase, study teams determine the instrument’s value to the trial—and its suitability to the study’s scientific parameters. The importance of clinical validation to data quality cannot be overstated.
To optimize this process, an article in Pharmacological Reviews suggests several important factors to appraise during the clinical validation phase:
Once a device has passed through technical and clinical validation, it should then be assessed for participant validation. This phase ensures the device will be tolerable and usable for trial participants.
Patient technology training is necessary for every clinical trial that utilizes a digital instrument; some patients will naturally be more technologically savvy than others. But, even for the most technologically adept individual, a difficult user interface will decrease participation, introducing the risk of non-compliance and participant drop-off. As with all phases of validation, participant adoption is essential. Additionally, it is important that users share a common understanding of how they should use the instrument.
In a Digital Biomarkers article, the authors explain: “Patient engagement, early and often, is paramount to thoughtfully selecting what is most important. Without patient-focused measurement, stakeholders risk entrenching digital versions of poor traditional assessments and proliferating low-value tools that are ineffective, burdensome, and reduce both quality and efficiency in clinical research and care.”
These processes, as well as the digital instruments themselves, are transforming the way clinical trials are conducted today. But, the future of digital instruments is even more promising.
In the years ahead, digital instrumentation will continue to enhance data capture and analysis in clinical trials. Digital instruments that leverage artificial intelligence (AI) and machine learning will extend the possibilities further—allowing study teams to reach previously unattainable digital endpoints.
Artificial intelligence (AI) is driving innovation across many industries, and clinical research is no different. AI allows clinical trial teams to use technology to perform tasks that normally require human intelligence. For instance, automated searches of participant data can generate insights to enhance health outcomes and patient experiences.
Sponsors can leverage these sophisticated models to make sense of the vast volumes of data enabled by continuous and intermittent real-life monitoring. Machine learning, deep learning, computer vision, and signal processing all offer the potential for more precise, user-centric assessments and analysis.
Machine learning (ML) is a “branch of AI and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy,” according to IBM. Computer systems that use machine learning infer meaning from patterns in data to get “smarter” over time.
Researchers explain in Trials: “Machine learning has the potential to help improve the success, generalizability, patient-centeredness, and efficiency of clinical trials. Various ML approaches are available for managing large and heterogeneous sources of data, identifying intricate and occult patterns, and predicting complex outcomes. As a result, ML has value to add across the spectrum of clinical trials, from preclinical drug discovery to pre-trial planning through study execution to data management and analysis.”
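As a sketch of the kind of ML workflow described above, the example below fits a cross-validated logistic regression to synthetic wearable-derived features. The features, labels, and "responder" relationship are fabricated for demonstration only and carry no clinical meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic illustration: predict a binary responder label from simple
# wearable-derived features (mean daily steps, sleep hours, resting heart rate).
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.normal(6000, 1500, n),   # daily step count
    rng.normal(6.5, 1.0, n),     # sleep hours
    rng.normal(68, 8, n),        # resting heart rate
])
# Fabricated ground truth: more active, better-rested participants "respond" more often.
y = ((X[:, 0] > 6000) & (X[:, 1] > 6.5)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```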
Deep learning is a type of machine learning and AI that mimics the way humans gain some types of knowledge. The term “deep” refers to the many layers the system uses to make predictions, correct its errors, and build a stronger understanding. The output of deep learning is a statistical model that becomes more accurate over time.
Computer vision trains machines to capture and interpret visual information using machine learning models.
With computer vision, a clinical research app can analyze image and video data to classify objects, gather data, identify patterns, and flag errors in much less time than it would take a human to do so. Computer vision also allows systems to respond to human interaction—for instance, using facial recognition to unlock a smartphone.
“Computer vision can improve both speed and accuracy when analyzing medical imaging: recognizing hidden patterns and making diagnoses with fewer errors than human professionals,” according to HIMSS. More accurate, efficient imaging analysis will also support the continued development of tools such as augmented ePRO.
Researchers often collect signal data such as sound, images, and biological indicators such as ECG. However, distortions and background “noise”—the effects of external biases that can influence signals—can make high-quality data hard to gather.
Signal processing can address this signal-to-noise problem. This area of electrical engineering models and analyzes data representations of physical events. Backed by models built from large pools of reference data, signal processing can lessen the effects of noise. Applications for signal processing include the use of vocal biomarkers for conditions as varied as Parkinson’s disease, Alzheimer’s disease, multiple sclerosis, and rheumatoid arthritis.
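The sketch below shows the basic idea of noise reduction with a simple low-pass filter applied to a synthetic signal. The filter design and cutoff are arbitrary choices for illustration, not parameters of any validated biomarker pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative noise reduction: a slow 1 Hz waveform buried in higher-frequency
# noise, cleaned with a low-pass Butterworth filter. All parameters are arbitrary.
fs = 100                                    # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.0 * t)         # underlying 1 Hz signal
noisy = clean + 0.5 * np.random.default_rng(0).normal(size=t.size)

b, a = butter(N=4, Wn=5, btype="low", fs=fs)    # 5 Hz low-pass filter
filtered = filtfilt(b, a, noisy)                 # zero-phase filtering

def snr_db(signal, estimate):
    """Signal power relative to residual error power, in decibels."""
    noise = estimate - signal
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

print(f"SNR before: {snr_db(clean, noisy):.1f} dB, after: {snr_db(clean, filtered):.1f} dB")
```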
Improving the signal-to-noise ratio using the rapidly evolving capabilities of AI will allow sponsors to achieve even greater accuracy with digital instrumentation. And layering these capabilities on top of unprecedented volumes of collected data will reveal new relationships between symptoms and disease states—leading to novel endpoints that can advance medicine beyond what was envisioned a decade ago.
The conclusion is clear: Sensors, wearables, and digital biomarkers are enabling greater clinical trial accuracy with reduced patient burden. These tools advance clinical research across therapeutic areas, deliver precise tracking that generates richer data and mitigates risk, and capture meaningful signals in real time. And ongoing innovation in AI-enabled technologies ensures that digital instruments will continue to deliver increasingly powerful therapeutic insights in the years to come.
More sponsors than ever are embracing clinical research through digital endpoints backed by sophisticated, user-centric tools. To take advantage of the wearables, sensors, and digital biomarkers revolution, sponsors must adopt a strategic approach, thoughtfully selecting and integrating digital instruments when appropriate, to achieve precise data that correlates to their specific clinical endpoints—and unlocks a new paradigm of accuracy.
Healthcare’s digital revolution is reshaping medical research, diagnostics, and therapeutics. One prime example: Digital health technologies (DHTs) such as sensors, wearables, and digital biomarkers which are gaining widespread adoption among patients, providers, and clinical researchers. In fact, forward-thinking sponsors are turning to sensors and wearables to capture richer, better data in clinical trials across therapeutic areas and sectors.
The life sciences industry’s embrace of sensors and wearables underscores that these tools aren’t a flash in the pan. On the contrary: Digital instruments such as sensors and wearables are rapidly transforming the industry’s definition of accuracy across three key measures: volume, practicality, and precision. With the added benefit of AI and machine learning, these tools could even allow sponsors to develop novel digital biomarkers, with profound implications for scientific discovery.
In this article, we’ll take a deep dive into sensors, wearables, and digital biomarkers within clinical research. We’ll propose a new paradigm of accuracy, explore how these tools are shaping the future of the industry, and discuss how to assess these devices for potential inclusion in a clinical trial.
Sensors, wearables, and digital biomarkers are interconnected, overlapping concepts with some key distinctions. In order to build a clear framework for future discussion, we will provide a definition of each.
Medical sensors are small electronic devices that capture data from a patient’s body in real time. Subcategories of sensors include wearables, portables, and digestibles. In other words, patients can carry, wear, or ingest sensors.
The sensor’s location on or inside the body varies, depending on what is being measured. For instance:
Sensors can track a wide range of health data while the body is active or resting, including:
After sensors collect data, they transmit it to a connected device via wireless technology. Research teams can then analyze and interpret data submissions from study participants via a central dashboard.
Sensors are exceptionally valuable to clinical researchers because they allow study teams to generate and monitor continuous and/or intermittent data. While traditional or electronic patient-reported outcomes provide subjective data snapshots, sensors can provide a steady stream of objective data. And, by generating larger volumes of real-time data, sensors serve as an effective alternative to other forms of outcome reporting. They can be especially useful for trials within disease states and therapeutic areas that benefit from a larger body of data.
There is one subcategory of sensors in particular—wearable sensors, or wearables—that is gaining significant traction in clinical research.
Wearables offer continuous monitoring and data reporting through devices that patients physically wear on their bodies. Sensors can be integrated into a wide range of wearable objects, including “smart” shirts, vests, watches, glasses, or socks.
Some researchers group sensors such as skin patches and intra-body devices in the wearables category. However, a strict definition of wearables includes sensors that a patient can easily put on and take off (“wear”) as needed according to the purposes of a study.
It’s important to note the differences between wearable sensors and consumer wearables. Some consumer-grade activity trackers may not deliver the accuracy needed for clinical research.
Wearable sensors, however, deliver constant, precise, real-time physiological and behavioral data. When appropriately included in a study protocol, these medical sensors can supply rich and robust information as either a complement to or a replacement for certain ePRO components.
In developing wearables, device manufacturers often focus on optimizing the form factor (physical characteristics) of a device, which improves the ease of use and overall participant experience in a trial. When wearables are not optimized for trials, compliance can suffer.
Despite these challenges, there is broad consensus that accessible tools like sensors and wearables have the potential to unlock new ways of capturing and interpreting patient data—with significant implications for clinical research.
Digital biomarkers represent another important area of advancement in clinical trial measurement. As the industry continues to debate the precise definition of "digital biomarkers," the FDA has stepped in to issue guidance and elaborate on the topic in a Nature article:
“FDA defines a digital biomarker to be a characteristic or set of characteristics, collected from digital health technologies, that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions.”
Building on its definition of digital biomarkers, the FDA shared several examples from published literature:
As we can see from the FDA’s definition and examples of digital biomarkers, a clear path can be traced between sensor-enabled data collection and digital biomarker use in clinical trials. Sensors and wearables collect data on physiological characteristics (digital biomarkers) that can be used to evaluate the efficacy and/or safety of an intervention.
For instance, study teams are digitally measuring changes in gait, speech, and loss of or slowed movements to indicate the presence of Parkinson's disease or other nervous system disorders. Other digital measurements associated with changes in perceptual-motor coordination, cognitive processing speed, prospective memory, spatial memory, gait, and inhibition are increasingly being considered as possible indicators of the presence of dementia.
To better understand the nuances of digital biomarkers, we can divide them into two subcategories: passive and active.
Biomarkers and their application are not novel: “Legacy” biomarkers have served as an integral component of clinical research and practice for many years. However, digital biomarkers differ from legacy biomarkers in that they harness technology to gather and apply objective data in multiple medical applications.
A simple example can be seen in blood pressure measurement. Standard blood pressure readings taken by a clinician with a manual sphygmomanometer represent a legacy biomarker, while blood pressure obtained through a remote sensor can be considered a digital biomarker. In a recent meta-analysis of the diagnostic accuracy of mercurial versus digital blood pressure measurement devices published in Nature, a broad range of digital blood pressure biomarkers for at-home use were shown to have moderate accuracy and to provide accurate information on blood pressure with which diagnostic and treatment decisions could be made. According to the Nature article, access to digital blood pressure devices “changes the quality of detection of hypertension and management and thus contributes to early diagnosis and prevention."
Sensors, wearables, and digital biomarkers can all be grouped under the term digital instruments. Digital instruments are defined as sensors and wearables that collect and quantify measurable patient data—enabling greater accessibility and greater accuracy. Using digital instruments, researchers are able to capture clinical data from patients in a way that is potentially transformative for clinical research.
When deployed in decentralized and hybrid clinical trials, digital instruments enable the participation of populations who have been underrepresented in clinical research. Digital instruments can be shipped directly to patients, allowing measurements to be conducted at home and breaking down traditional barriers such as geographic distance from a study site.
For instance, fibromyalgia and chronic fatigue syndrome limit a patient’s mobility and energy and, in turn, their ability to visit a clinical trial site. As a result, these and other therapeutic areas remain elusive research categories. Digital instruments serve to bridge such gaps, expanding access to populations traditionally excluded from clinical studies and mitigating stigmas associated with some medical conditions—all of which serve to open new frontiers for research.
Researchers in Pharmacological Reviews note the benefits of digital tools, including “increasing participation rates and enabling trials to be conducted in vulnerable populations with chronic diseases, such as the elderly, psychiatric patients, and children.” They add: “These patient groups have traditionally been neglected in clinical research because of a lack of mobility, additional ethical barriers, and low recruitment rates.”
The benefits of digital instruments extend beyond recruitment and into research authenticity and real-world practicality. These tools “allow study teams to measure the effects of an intervention in a patient’s natural environment, increasing a study’s ecological or ‘real world’ validity,” the authors note. “The objective nature of these measurements can lead to higher sensitivity and objectivity, compared with clinical rating scales. Wearable technology also offers high-frequency and situation-relevant measurements, moving away from the artificially contrived intervals used in clinical trials."
Increased accuracy is an ever-present objective for sponsors. Digital instrumentation is primed to help them achieve it, allowing for more frequent and ongoing monitoring in a real-life setting with increased precision—all factors that drive accuracy. The volume of data gathered through digital instruments provides results that can outperform traditional endpoints.
For example, a patient who checks their blood sugar manually four times daily receives four individual readings. But, a continuous glucose monitor provides clinicians with “big picture” insights—revealing when and whether glucose levels bottom out overnight and when exactly they are spiking.
This example demonstrates how digital instruments can transform medical assessment from snapshots to continuous or intermittent real-life tracking of data in a patient’s normal environment. These data points can complement technologies such as electronic patient-reported outcomes (ePRO) and electronic clinician-reported outcomes (eClinRO).
The dual benefit of digital instruments lies in their ability to unlock richer, more precise assessments that are also easier for participants. In a traditional clinical trial, comprehensive assessments are burdensome and therefore limited in frequency, so “they only provide snapshots of treatment efficacy,” according to an article in the Journal of Medical Internet Research.
“However, symptoms can fluctuate from week to week, day to day, and even within a single day,” the authors note. They add that in a traditional trial, a patient’s symptoms might improve or worsen by chance because of “factors unrelated to treatment efficacy.” Yet, without real-world data, it’s difficult to understand what’s causing those changes. Additionally, when a measurement is too infrequent, study teams run the risk of long time lapses prior to the detection of adverse or serious adverse events.
Continuous and intermittent real-life tracking from digital instruments is the solution to creating meaningful digital measures that represent real-life scenarios. These instruments deliver a cohesive feedback loop to support the early detection of adverse events and prevent health risks and costly delays.
Pharmaceutical and medical device sponsors are particularly interested in digital instruments’ ability to reduce the “noise” surrounding relevant clinical signals. For example, in assessments of cognition, the signal is the sensitivity of a measure to detect biological and cognitive changes due to the therapeutic intervention. Noise refers to the effects of external factors that can influence these measures. Improving the signal-to-noise ratio by eliminating as many external variables as possible helps to ensure that sponsors receive accurate data specific to the therapeutic manipulation in question.
When it comes to traditional ePRO reporting, biases, environmental factors, and inconsistencies in measurement negatively impact accuracy. Training of participants can significantly reduce this ratio. But, digital instruments can go a step further, facilitating—according to Dr. Nathan Cashdollar, Director of Digital Neuroscience at Cambridge Cognition—the assessment of “people with psychiatric and neurological disorders daily, and remotely, without the supervision of a healthcare professional, thus providing an improved signal-to-noise ratio, [which enables] a more sensitive metric of successful therapeutic interventions."
Realizing the benefits of digital instruments requires an intentional approach. In order for these innovations to be practically applied in clinical research—and, ultimately, generate more accurate and meaningful data—sponsors must first identify the right signal and then validate prospective digital instrumentation tools based on their ability to generate that signal.
The promise of digital instruments—and of the real-time, real-life signal capture they facilitate—to transform clinical research is clear. But, how should sponsors leverage these powerful capabilities to demonstrate efficacy and bring treatments to market more quickly and cost-effectively?
The starting point for any successful trial is the establishment of the right study endpoints. As advances in digital instruments and biomarkers evolve the way we measure outcomes, the clinical research industry has begun to think about the implications for trial endpoints. Too often the protocols in decentralized clinical trials adopt and perpetuate suboptimal outcomes assessments. The opportunity exists to leverage more nuanced digital measurements or outcomes to achieve stronger and increasingly digital endpoints.
Digital endpoints allow study teams to use digital instruments to monitor patients in their natural, real-world environment. These tools ultimately provide a more accurate assessment of the patient’s lived experience, including granular data that was previously undiscoverable.
In order to maximize the relevancy and accuracy of this data, sponsors must first determine which outcomes to measure in the clinical trial—one of the most critical study decisions—and then choose digital instruments in the earliest stages of study design to facilitate the optimal delivery of those outcomes.
In this process, study teams should consider the natural environments in which their endpoints arise. According to an article in Digital Biomarkers journal, successful digital endpoint and digital biomarker selection require intense interdisciplinary collaboration and “the development of an ecosystem in which the vast quantities of data those digital endpoints generate can be analyzed.”
The “vast quantities of data” these digital instruments gather should not be underestimated. Monitoring a patient continuously for several weeks generates gigabytes of information. A virtual research organization can partner with sponsors to streamline the process and determine how much of that data needs to be stored and analyzed.
As demonstrated, the validation of digital endpoints for clinical research studies is essential, and it requires the identification of the right digital instruments and outcomes from the outset.
When choosing a digital instrument for a clinical trial, it’s tempting to want to use all of the bells and whistles of a new device that promises to measure everything under the sun. However, more often than not, simple solutions are best. In a trial for an antihypertensive drug, for example, researchers might consider using a smartwatch to monitor pulse, activity, sleep habits, and skin temperature. But, if the study’s primary endpoint is reduced blood pressure following treatment, the team might choose a digital blood pressure cuff with fewer but better-quality signals.
After identifying the right digital endpoint for a study, sponsors must undertake three steps to select the correct digital instrumentation for their clinical research. Digital instruments for clinical trials must succeed through all three steps—technical, clinical, and practical validation—to ensure accurate data collection and a successful trial.
Technical validation of a digital instrument must precede any of the other, more “hands-on” steps of validation. The goal in this stage is to assess whether an instrument is fit for use in the trial.
Technical validation determines how usable, reliable, and reproducible the technology is. The device must meet minimum technological standards used by the healthcare industry with an automated flow of data, requiring minimal manual aggregation and manipulation by expert raters and trial teams.
Once again, reducing the participant burden and providing a good user experience are key. Pharmacological Reviews notes: "In this phase of validation, it is also advised to consider the amount of training and instruction that will be necessary to ensure measurements are conducted correctly by patients." Susan Dallabrida, CEO of SPRIM—experts in DCT protocol development and optimization adds: "In our experience, short training modules can significantly improve the accuracy of outcomes. When patients are clear about how and when to use a device, so many issues can be avoided.”
In addition to complying with regulations set forth by agencies such as the FDA, digital instruments should offer minimal inter-device and intra-device variability. This helps to facilitate the collection of the most accurate data possible in a clinical trial. The device should also offer a degree of privacy to patients using encryption.
Following technical validation is clinical validation, also known as medical validation. In this phase, study teams determine the instrument’s value to the trial—and its suitability to the study’s scientific parameters. The importance of clinical validation to data quality cannot be overstated.
To optimize this process, an article in Pharmacological Review suggests several important factors to take into consideration during the clinical validation phase to appraise:
Once a device has passed through technical and clinical validation, it should then be assessed for participant validation. This phase ensures the device will be tolerable and usable for trial participants.
Patient technology training is necessary for every clinical trial that utilizes a digital instrument; some patients will naturally be more technologically savvy than others. But, even for the most technology-apt individual, a difficult user interface will decrease participation, introducing the risk of non-compliance and participant drop-off. Like all phases of validation, the participants’ adoption is essential. Additionally, it is important that users have a common base of understanding of the way they should use the instrument.
In a Digital Biomarkers article, the authors explain: “Patient engagement, early and often, is paramount to thoughtfully selecting what is most important. Without patient-focused measurement, stakeholders risk entrenching digital versions of poor traditional assessments and proliferating low-value tools that are ineffective, burdensome, and reduce both quality and efficiency in clinical research and care.”
These processes, as well as the digital instruments themselves, are transforming the way clinical trials are conducted today. But, the future of digital instruments is even more promising.
In the years ahead, digital instrumentation will continue to enhance data capture and analysis in clinical trials. Digital instruments that leverage artificial intelligence (AI) and machine learning will extend the possibilities further—allowing study teams to reach previously unattainable digital endpoints.
Artificial intelligence (AI) is driving innovation across many industries, and clinical research is no different. AI allows clinical trial teams to use technology to perform tasks that normally require human intelligence. For instance, automated searches of participant data can generate insights to enhance health outcomes and patient experiences.
Sponsors can leverage these sophisticated models to make sense of the vast volumes of data enabled by continuous and intermittent real-life monitoring. Machine learning, deep learning, computer vision, and signal processing all offer the potential for more precise, user-centric assessments and analysis.
Machine learning (ML) is a “branch of AI and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy,” according to IBM. Computer systems that use machine learning infer meaning from patterns in data to get “smarter” over time.
Researchers explain in Trials: “Machine learning has the potential to help improve the success, generalizability, patient-centeredness, and efficiency of clinical trials. Various ML approaches are available for managing large and heterogeneous sources of data, identifying intricate and occult patterns, and predicting complex outcomes. As a result, ML has value to add across the spectrum of clinical trials, from preclinical drug discovery to pre-trial planning through study execution to data management and analysis.”
Deep learning is a type of machine learning and AI that mimics the way humans gain some types of knowledge. The indicator “deep” comes from the many layers of complexity the system uses to make predictions, learn correct information, and build a stronger understanding. The output of deep learning is a statistical model that becomes more accurate over time.
Computer vision trains machines to capture and interpret visual information using machine learning models.
With computer vision, a clinical research app can analyze image and video data to classify objects, gather data, identify patterns, and flag errors in much less time than it would take a human to do so. Computer vision also allows systems to respond to human interaction—for instance, using facial recognition to unlock a smartphone.
“Computer vision can improve both speed and accuracy when analyzing medical imaging: recognizing hidden patterns and making diagnoses with fewer errors than human professionals,” according to HIMSS. More accurate, efficient imaging analysis will also support the continued development of tools such as augmented ePRO.
Researchers often collect signal data such as sound, images, and biological indicators such as ECG. However, distortions and background “noise”—the effects of external biases that can influence signals—can make high-quality data hard to gather.
Signal processing can address this 'signal-to-noise ratio' problem. This area of electrical engineering models and analyzes data representations of physical events. Backed by a deep conceptual understanding of a large data pool, signal processing can lessen the effects of potential noise. Applications for signal processing include the use of vocal biomarkers for conditions as varied as Parkinson’s Disease, Alzheimer’s Disease, multiple sclerosis, and rheumatoid arthritis.
Improving the signal-to-noise ratio using the rapidly evolving capabilities of AI will allow sponsors to achieve even greater accuracy with digital instrumentation. And layering these capabilities on top of unprecedented volumes of collected data will reveal new relationships between symptoms and disease states—leading to novel endpoints that can advance medicine beyond what was envisioned a decade ago.
The conclusion is clear: Sensors, wearables, and digital biomarkers are enabling greater clinical trial accuracy with reduced patient burden. These tools advance clinical research across therapeutic areas, deliver precise tracking to generate richer data and mitigate risk, and capture meaningful signals in real-time. And the ongoing innovation in AI-enabled technologies ensures that digital instruments will continue to deliver increasingly powerful therapeutic insights in the years to come.
More sponsors than ever are embracing clinical research through digital endpoints backed by sophisticated, user-centric tools. To take advantage of the wearables, sensors, and digital biomarkers revolution, sponsors must adopt a strategic approach, thoughtfully selecting and integrating digital instruments when appropriate, to achieve precise data that correlates to their specific clinical endpoints—and unlocks a new paradigm of accuracy.
Digital instruments such as sensors and wearables are transforming the industry’s definition of accuracy in three measures: volume, practicality, and precision.
Healthcare’s digital revolution is reshaping medical research, diagnostics, and therapeutics. One prime example: Digital health technologies (DHTs) such as sensors, wearables, and digital biomarkers which are gaining widespread adoption among patients, providers, and clinical researchers. In fact, forward-thinking sponsors are turning to sensors and wearables to capture richer, better data in clinical trials across therapeutic areas and sectors.
The life sciences industry’s embrace of sensors and wearables underscores that these tools aren’t a flash in the pan. On the contrary: Digital instruments such as sensors and wearables are rapidly transforming the industry’s definition of accuracy across three key measures: volume, practicality, and precision. With the added benefit of AI and machine learning, these tools could even allow sponsors to develop novel digital biomarkers, with profound implications for scientific discovery.
In this article, we’ll take a deep dive into sensors, wearables, and digital biomarkers within clinical research. We’ll propose a new paradigm of accuracy, explore how these tools are shaping the future of the industry, and discuss how to assess these devices for potential inclusion in a clinical trial.
Sensors, wearables, and digital biomarkers are interconnected, overlapping concepts with some key distinctions. In order to build a clear framework for future discussion, we will provide a definition of each.
Medical sensors are small electronic devices that capture data from a patient’s body in real time. Subcategories of sensors include wearables, portables, and digestibles. In other words, patients can carry, wear, or ingest sensors.
The sensor’s location on or inside the body varies, depending on what is being measured. For instance:
Sensors can track a wide range of health data while the body is active or resting, including:
After sensors collect data, they transmit it to a connected device via wireless technology. Research teams can then analyze and interpret data submissions from study participants via a central dashboard.
Sensors are exceptionally valuable to clinical researchers because they allow study teams to generate and monitor continuous and/or intermittent data. While traditional or electronic patient-reported outcomes provide subjective data snapshots, sensors can provide a steady stream of objective data. And, by generating larger volumes of real-time data, sensors serve as an effective alternative to other forms of outcome reporting. They can be especially useful for trials within disease states and therapeutic areas that benefit from a larger body of data.
There is one subcategory of sensors in particular—wearable sensors, or wearables—that is gaining significant traction in clinical research.
Wearables offer continuous monitoring and data reporting through devices that patients physically wear on their bodies. Sensors can be integrated into a wide range of wearable objects, including “smart” shirts, vests, watches, glasses, or socks.
Some researchers group sensors such as skin patches and intra-body devices in the wearables category. However, a strict definition of wearables includes sensors that a patient can easily put on and take off (“wear”) as needed according to the purposes of a study.
It’s important to note the differences between wearable sensors and consumer wearables. Some consumer-grade activity trackers may not deliver the accuracy needed for clinical research.
Wearable sensors, however, deliver constant, precise, real-time physiological and behavioral data. When appropriately included in a study protocol, these medical sensors can supply rich and robust information as either a complement to or a replacement for certain ePRO components.
In developing wearables, device manufacturers often focus on optimizing the form factor (physical characteristics) of a device, which improves the ease of use and overall participant experience in a trial. When wearables are not optimized for trials, compliance can suffer.
Despite these challenges, there is broad consensus that accessible tools like sensors and wearables have the potential to unlock new ways of capturing and interpreting patient data—with significant implications for clinical research.
Digital biomarkers represent another important area of advancement in clinical trial measurement. As the industry continues to debate the precise definition of "digital biomarkers," the FDA has stepped in to issue guidance and elaborate on the topic in a Nature article:
“FDA defines a digital biomarker to be a characteristic or set of characteristics, collected from digital health technologies, that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions.”
Building on its definition of digital biomarkers, the FDA shared several examples from published literature:
As we can see from the FDA’s definition and examples of digital biomarkers, a clear path can be traced between sensor-enabled data collection and digital biomarker use in clinical trials. Sensors and wearables collect data on physiological characteristics (digital biomarkers) that can be used to evaluate the efficacy and/or safety of an intervention.
For instance, study teams are digitally measuring changes in gait, speech, and loss of or slowed movements to indicate the presence of Parkinson's disease or other nervous system disorders. Other digital measurements associated with changes in perceptual-motor coordination, cognitive processing speed, prospective memory, spatial memory, gait, and inhibition are increasingly being considered as possible indicators of the presence of dementia.
To better understand the nuances of digital biomarkers, we can divide them into two subcategories: passive and active.
Biomarkers and their application are not novel: “Legacy” biomarkers have served as an integral component of clinical research and practice for many years. However, digital biomarkers differ from legacy biomarkers in that they harness technology to gather and apply objective data in multiple medical applications.
A simple example can be seen in blood pressure measurement. Standard blood pressure readings taken by a clinician with a manual sphygmomanometer represent a legacy biomarker, while blood pressure obtained through a remote sensor can be considered a digital biomarker. In a recent meta-analysis of the diagnostic accuracy of mercurial versus digital blood pressure measurement devices published in Nature, a broad range of digital blood pressure biomarkers for at-home use were shown to have moderate accuracy and to provide accurate information on blood pressure with which diagnostic and treatment decisions could be made. According to the Nature article, access to digital blood pressure devices “changes the quality of detection of hypertension and management and thus contributes to early diagnosis and prevention."
Sensors, wearables, and digital biomarkers can all be grouped under the term digital instruments. Digital instruments are defined as sensors and wearables that collect and quantify measurable patient data—enabling greater accessibility and greater accuracy. Using digital instruments, researchers are able to capture clinical data from patients in a way that is potentially transformative for clinical research.
When deployed in decentralized and hybrid clinical trials, digital instruments enable the participation of populations who have been underrepresented in clinical research. Digital instruments can be shipped directly to patients, allowing measurements to be conducted at home and breaking down traditional barriers such as geographic distance from a study site.
For instance, fibromyalgia and chronic fatigue syndrome limit a patient’s mobility and energy and, in turn, their ability to visit a clinical trial site. As a result, these and other therapeutic areas remain elusive research categories. Digital instruments serve to bridge such gaps, expanding access to populations traditionally excluded from clinical studies and mitigating stigmas associated with some medical conditions—all of which serve to open new frontiers for research.
Researchers in Pharmacological Reviews note the benefits of digital tools, including “increasing participation rates and enabling trials to be conducted in vulnerable populations with chronic diseases, such as the elderly, psychiatric patients, and children.” They add: “These patient groups have traditionally been neglected in clinical research because of a lack of mobility, additional ethical barriers, and low recruitment rates.”
The benefits of digital instruments extend beyond recruitment and into research authenticity and real-world practicality. These tools “allow study teams to measure the effects of an intervention in a patient’s natural environment, increasing a study’s ecological or ‘real world’ validity,” the authors note. “The objective nature of these measurements can lead to higher sensitivity and objectivity, compared with clinical rating scales. Wearable technology also offers high-frequency and situation-relevant measurements, moving away from the artificially contrived intervals used in clinical trials."
Increased accuracy is an ever-present objective for sponsors. Digital instrumentation is primed to help them achieve it, allowing for more frequent and ongoing monitoring in a real-life setting with increased precision—all factors that drive accuracy. The volume of data gathered through digital instruments provides results that can outperform traditional endpoints.
For example, a patient who checks their blood sugar manually four times daily receives four individual readings. But, a continuous glucose monitor provides clinicians with “big picture” insights—revealing when and whether glucose levels bottom out overnight and when exactly they are spiking.
This example demonstrates how digital instruments can transform medical assessment from snapshots to continuous or intermittent real-life tracking of data in a patient’s normal environment. These data points can complement technologies such as electronic patient-reported outcomes (ePRO) and electronic clinician-reported outcomes (eClinRO).
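As a hedged illustration of the glucose example above, the following sketch summarizes a day of continuous readings into “big picture” metrics such as time-in-range and nocturnal low events. The thresholds and data format are assumptions for illustration, not clinical guidance.

```python
# Illustrative sketch (thresholds and data format assumed, not clinical guidance):
# turning a day of continuous glucose monitor (CGM) readings into "big picture"
# metrics that four manual fingerstick checks cannot provide.
def summarize_cgm(readings, low=70, high=180):
    """readings: list of (timestamp, glucose_mg_dl) pairs sampled every few minutes."""
    values = [glucose for _, glucose in readings]
    time_in_range = sum(low <= g <= high for g in values) / len(values)

    # Nocturnal lows (midnight to 6 a.m.) that daytime spot checks would miss.
    nocturnal_lows = [(ts, g) for ts, g in readings if ts.hour < 6 and g < low]

    peak_ts, peak_value = max(readings, key=lambda r: r[1])
    return {
        "time_in_range_pct": round(100 * time_in_range, 1),
        "nocturnal_low_events": len(nocturnal_lows),
        "peak_glucose_mg_dl": peak_value,
        "peak_time": peak_ts.isoformat(),
    }

# Usage: summarize_cgm(day_of_5_minute_readings) -> summary dict for a study dashboard.
```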
The dual benefit of digital instruments lies in their ability to unlock richer, more precise assessments that are also easier for participants. In a traditional clinical trial, comprehensive assessments are burdensome and therefore limited in frequency, so “they only provide snapshots of treatment efficacy,” according to an article in the Journal of Medical Internet Research.
“However, symptoms can fluctuate from week to week, day to day, and even within a single day,” the authors note. They add that in a traditional trial, a patient’s symptoms might improve or worsen by chance because of “factors unrelated to treatment efficacy.” Yet, without real-world data, it’s difficult to understand what’s causing those changes. Additionally, when a measurement is too infrequent, study teams run the risk of long time lapses prior to the detection of adverse or serious adverse events.
Continuous and intermittent real-life tracking from digital instruments makes it possible to create meaningful digital measures that reflect real-life scenarios. These instruments deliver a cohesive feedback loop that supports the early detection of adverse events and helps prevent health risks and costly delays.
Pharmaceutical and medical device sponsors are particularly interested in digital instruments’ ability to reduce the “noise” surrounding relevant clinical signals. For example, in assessments of cognition, the signal is the sensitivity of a measure to detect biological and cognitive changes due to the therapeutic intervention. Noise refers to the effects of external factors that can influence these measures. Improving the signal-to-noise ratio by eliminating as many external variables as possible helps to ensure that sponsors receive accurate data specific to the therapeutic manipulation in question.
In traditional ePRO reporting, biases, environmental factors, and inconsistencies in measurement all erode accuracy. Training participants can significantly reduce these sources of noise. But, digital instruments can go a step further. According to Dr. Nathan Cashdollar, Director of Digital Neuroscience at Cambridge Cognition, they facilitate the assessment of “people with psychiatric and neurological disorders daily, and remotely, without the supervision of a healthcare professional, thus providing an improved signal-to-noise ratio, [which enables] a more sensitive metric of successful therapeutic interventions."
Realizing the benefits of digital instruments requires an intentional approach. In order for these innovations to be practically applied in clinical research—and, ultimately, generate more accurate and meaningful data—sponsors must first identify the right signal and then validate prospective digital instrumentation tools based on their ability to generate that signal.
The promise of digital instruments—and of the real-time, real-life signal capture they facilitate—to transform clinical research is clear. But, how should sponsors leverage these powerful capabilities to demonstrate efficacy and bring treatments to market more quickly and cost-effectively?
The starting point for any successful trial is the establishment of the right study endpoints. As advances in digital instruments and biomarkers evolve the way we measure outcomes, the clinical research industry has begun to think about the implications for trial endpoints. Too often the protocols in decentralized clinical trials adopt and perpetuate suboptimal outcomes assessments. The opportunity exists to leverage more nuanced digital measurements or outcomes to achieve stronger and increasingly digital endpoints.
Digital endpoints allow study teams to use digital instruments to monitor patients in their natural, real-world environment. These tools ultimately provide a more accurate assessment of the patient’s lived experience, including granular data that was previously undiscoverable.
In order to maximize the relevancy and accuracy of this data, sponsors must first determine which outcomes to measure in the clinical trial—one of the most critical study decisions—and then choose digital instruments in the earliest stages of study design to facilitate the optimal delivery of those outcomes.
In this process, study teams should consider the natural environments in which their endpoints arise. According to an article in Digital Biomarkers journal, successful digital endpoint and digital biomarker selection require intense interdisciplinary collaboration and “the development of an ecosystem in which the vast quantities of data those digital endpoints generate can be analyzed.”
The “vast quantities of data” these digital instruments gather should not be underestimated. Monitoring a patient continuously for several weeks generates gigabytes of information. A virtual research organization can partner with sponsors to streamline the process and determine how much of that data needs to be stored and analyzed.
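A back-of-envelope calculation illustrates the scale. Assuming typical (but here, assumed) sampling rates and sample sizes, a single participant's raw sensor streams reach several gigabytes within an eight-week study:

```python
# Back-of-envelope estimate (sampling rates, sample sizes, and study length are
# assumptions) of the raw data volume one participant generates over eight weeks.
def raw_volume_gb(sample_rate_hz, bytes_per_sample, channels, days):
    seconds_per_day = 86_400
    return sample_rate_hz * bytes_per_sample * channels * seconds_per_day * days / 1e9

accel_gb = raw_volume_gb(sample_rate_hz=50, bytes_per_sample=2, channels=3, days=56)   # ~1.5 GB
ecg_gb = raw_volume_gb(sample_rate_hz=250, bytes_per_sample=2, channels=1, days=56)    # ~2.4 GB
print(f"Accelerometer: {accel_gb:.1f} GB; single-lead ECG: {ecg_gb:.1f} GB per participant")
```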
As demonstrated, the validation of digital endpoints for clinical research studies is essential, and it requires the identification of the right digital instruments and outcomes from the outset.
When choosing a digital instrument for a clinical trial, it’s tempting to want to use all of the bells and whistles of a new device that promises to measure everything under the sun. However, more often than not, simple solutions are best. In a trial for an antihypertensive drug, for example, researchers might consider using a smartwatch to monitor pulse, activity, sleep habits, and skin temperature. But, if the study’s primary endpoint is reduced blood pressure following treatment, the team might choose a digital blood pressure cuff with fewer but better-quality signals.
After identifying the right digital endpoint for a study, sponsors must undertake three steps to select the correct digital instrumentation for their clinical research. Digital instruments for clinical trials must succeed through all three steps of validation (technical, clinical, and participant) to ensure accurate data collection and a successful trial.
Technical validation of a digital instrument must precede any of the other, more “hands-on” steps of validation. The goal in this stage is to assess whether an instrument is fit for use in the trial.
Technical validation determines how usable, reliable, and reproducible the technology is. The device must meet minimum technological standards used by the healthcare industry with an automated flow of data, requiring minimal manual aggregation and manipulation by expert raters and trial teams.
Once again, reducing the participant burden and providing a good user experience are key. Pharmacological Reviews notes: "In this phase of validation, it is also advised to consider the amount of training and instruction that will be necessary to ensure measurements are conducted correctly by patients." Susan Dallabrida, CEO of SPRIM, a firm specializing in DCT protocol development and optimization, adds: "In our experience, short training modules can significantly improve the accuracy of outcomes. When patients are clear about how and when to use a device, so many issues can be avoided.”
In addition to complying with regulations set forth by agencies such as the FDA, digital instruments should offer minimal inter-device and intra-device variability. This helps to facilitate the collection of the most accurate data possible in a clinical trial. The device should also protect patient privacy, for example through encryption of the data it collects and transmits.
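As a simple illustration of the variability criterion, a study team might bench-test several units of a candidate device against the same reference signal and compare intra- and inter-device coefficients of variation. The sketch below uses hypothetical readings.

```python
# Minimal sketch (hypothetical readings): quantifying intra- and inter-device
# variability from repeated bench tests of three units against one reference signal.
import statistics

def coefficient_of_variation(values):
    return statistics.stdev(values) / statistics.mean(values)

# Repeated systolic readings (mmHg) of the same reference pressure, per unit.
bench_readings = {
    "unit_A": [121, 120, 122, 121, 119],
    "unit_B": [123, 122, 124, 123, 122],
    "unit_C": [120, 121, 119, 120, 121],
}

# Intra-device variability: spread of repeated readings within each unit.
intra_device = {unit: coefficient_of_variation(v) for unit, v in bench_readings.items()}

# Inter-device variability: spread of the per-unit means across units.
inter_device = coefficient_of_variation([statistics.mean(v) for v in bench_readings.values()])

print(intra_device, inter_device)
```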
Following technical validation is clinical validation, also known as medical validation. In this phase, study teams determine the instrument’s value to the trial—and its suitability to the study’s scientific parameters. The importance of clinical validation to data quality cannot be overstated.
To optimize this process, an article in Pharmacological Reviews suggests several important factors to appraise during the clinical validation phase:
Once a device has passed through technical and clinical validation, it should then be assessed for participant validation. This phase ensures the device will be tolerable and usable for trial participants.
Patient technology training is necessary for every clinical trial that utilizes a digital instrument; some patients will naturally be more technologically savvy than others. But, even for the most technologically adept individual, a difficult user interface will decrease participation, introducing the risk of non-compliance and participant drop-off. As with every phase of validation, participant adoption is essential. It is also important that users share a common understanding of how they should use the instrument.
In a Digital Biomarkers article, the authors explain: “Patient engagement, early and often, is paramount to thoughtfully selecting what is most important. Without patient-focused measurement, stakeholders risk entrenching digital versions of poor traditional assessments and proliferating low-value tools that are ineffective, burdensome, and reduce both quality and efficiency in clinical research and care.”
These processes, as well as the digital instruments themselves, are transforming the way clinical trials are conducted today. But, the future of digital instruments is even more promising.
In the years ahead, digital instrumentation will continue to enhance data capture and analysis in clinical trials. Digital instruments that leverage artificial intelligence (AI) and machine learning will extend the possibilities further—allowing study teams to reach previously unattainable digital endpoints.
Artificial intelligence (AI) is driving innovation across many industries, and clinical research is no different. AI allows clinical trial teams to use technology to perform tasks that normally require human intelligence. For instance, automated searches of participant data can generate insights to enhance health outcomes and patient experiences.
Sponsors can leverage these sophisticated models to make sense of the vast volumes of data enabled by continuous and intermittent real-life monitoring. Machine learning, deep learning, computer vision, and signal processing all offer the potential for more precise, user-centric assessments and analysis.
Machine learning (ML) is a “branch of AI and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy,” according to IBM. Computer systems that use machine learning infer meaning from patterns in data to get “smarter” over time.
Researchers explain in Trials: “Machine learning has the potential to help improve the success, generalizability, patient-centeredness, and efficiency of clinical trials. Various ML approaches are available for managing large and heterogeneous sources of data, identifying intricate and occult patterns, and predicting complex outcomes. As a result, ML has value to add across the spectrum of clinical trials, from preclinical drug discovery to pre-trial planning through study execution to data management and analysis.”
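The sketch below is a minimal, hypothetical example of the kind of pattern-finding described above: a standard classifier trained on wearable-derived daily summaries to predict treatment response. The features, labels, and data are synthetic and chosen purely for illustration.

```python
# Hypothetical sketch of ML on wearable-derived features: predicting treatment
# response from daily activity summaries. Features, labels, and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Each row: [mean daily step count, mean resting heart rate, mean sleep hours]
X = rng.normal(loc=[6000, 68, 6.5], scale=[1500, 6, 1.0], size=(200, 3))
y = rng.integers(0, 2, size=200)  # 1 = responder, 0 = non-responder (synthetic labels)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)  # cross-validated accuracy on synthetic data
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```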
Deep learning is a type of machine learning and AI that mimics the way humans gain some types of knowledge. The indicator “deep” comes from the many layers of complexity the system uses to make predictions, learn correct information, and build a stronger understanding. The output of deep learning is a statistical model that becomes more accurate over time.
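To illustrate the layered structure that gives deep learning its name, here is a small, hypothetical network that operates on fixed-length accelerometer windows. The architecture and window size are arbitrary choices for illustration, not a validated model.

```python
# Hypothetical sketch: a small 1D convolutional network over fixed-length
# accelerometer windows (3 axes x 250 samples). Architecture and sizes are
# illustrative, not a validated model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(in_channels=3, out_channels=16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 2),  # e.g., "tremor present" vs. "tremor absent"
)

windows = torch.randn(8, 3, 250)  # a batch of 8 five-second windows sampled at 50 Hz
logits = model(windows)           # shape: (8, 2)
```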
Computer vision trains machines to capture and interpret visual information using machine learning models.
With computer vision, a clinical research app can analyze image and video data to classify objects, gather data, identify patterns, and flag errors in much less time than it would take a human to do so. Computer vision also allows systems to respond to human interaction—for instance, using facial recognition to unlock a smartphone.
“Computer vision can improve both speed and accuracy when analyzing medical imaging: recognizing hidden patterns and making diagnoses with fewer errors than human professionals,” according to HIMSS. More accurate, efficient imaging analysis will also support the continued development of tools such as augmented ePRO.
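As a simple, hedged example of computer vision in code, the sketch below runs a classic face detector over a single captured frame. The file name is hypothetical, and real clinical applications would rely on far more capable, validated models.

```python
# Hedged example (file name is hypothetical): detecting a face in one frame with
# the classic Haar-cascade detector bundled with OpenCV. Production systems would
# use far more capable models.
import cv2

frame = cv2.imread("participant_frame.jpg")          # hypothetical captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # the detector expects grayscale

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```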
Researchers often collect signal data such as sound, images, and biological waveforms like the electrocardiogram (ECG). However, distortions and background “noise”—the effects of external biases that can influence signals—can make high-quality data hard to gather.
Signal processing can address this “signal-to-noise ratio” problem. This area of electrical engineering models and analyzes data representations of physical events. Informed by an understanding of the underlying signal and a large pool of reference data, signal processing techniques can suppress noise while preserving the signal of interest. Applications for signal processing include the use of vocal biomarkers for conditions as varied as Parkinson’s disease, Alzheimer’s disease, multiple sclerosis, and rheumatoid arthritis.
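As a minimal sketch of the idea, the snippet below applies a zero-phase band-pass filter to a synthetic, noisy sensor trace to suppress out-of-band noise. The sampling rate, pass band, and signal are illustrative assumptions.

```python
# Minimal sketch (sampling rate, pass band, and signal are illustrative
# assumptions): suppressing out-of-band noise in a raw sensor trace with a
# zero-phase band-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                              # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)

clean = np.sin(2 * np.pi * 1.2 * t)   # slow physiologic rhythm (~72 cycles/min)
noise = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
raw = clean + noise

# Keep the physiologic band (0.5-40 Hz); reject powerline hum and broadband noise.
b, a = butter(N=4, Wn=[0.5, 40], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, raw)
```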
Improving the signal-to-noise ratio using the rapidly evolving capabilities of AI will allow sponsors to achieve even greater accuracy with digital instrumentation. And layering these capabilities on top of unprecedented volumes of collected data will reveal new relationships between symptoms and disease states—leading to novel endpoints that can advance medicine beyond what was envisioned a decade ago.
The conclusion is clear: Sensors, wearables, and digital biomarkers are enabling greater clinical trial accuracy with reduced patient burden. These tools advance clinical research across therapeutic areas, deliver precise tracking to generate richer data and mitigate risk, and capture meaningful signals in real-time. And the ongoing innovation in AI-enabled technologies ensures that digital instruments will continue to deliver increasingly powerful therapeutic insights in the years to come.
More sponsors than ever are embracing clinical research through digital endpoints backed by sophisticated, user-centric tools. To take advantage of the wearables, sensors, and digital biomarkers revolution, sponsors must adopt a strategic approach, thoughtfully selecting and integrating digital instruments when appropriate, to achieve precise data that correlates to their specific clinical endpoints—and unlocks a new paradigm of accuracy.
Healthcare’s digital revolution is reshaping medical research, diagnostics, and therapeutics. One prime example: Digital health technologies (DHTs) such as sensors, wearables, and digital biomarkers which are gaining widespread adoption among patients, providers, and clinical researchers. In fact, forward-thinking sponsors are turning to sensors and wearables to capture richer, better data in clinical trials across therapeutic areas and sectors.
The life sciences industry’s embrace of sensors and wearables underscores that these tools aren’t a flash in the pan. On the contrary: Digital instruments such as sensors and wearables are rapidly transforming the industry’s definition of accuracy across three key measures: volume, practicality, and precision. With the added benefit of AI and machine learning, these tools could even allow sponsors to develop novel digital biomarkers, with profound implications for scientific discovery.
In this article, we’ll take a deep dive into sensors, wearables, and digital biomarkers within clinical research. We’ll propose a new paradigm of accuracy, explore how these tools are shaping the future of the industry, and discuss how to assess these devices for potential inclusion in a clinical trial.
Sensors, wearables, and digital biomarkers are interconnected, overlapping concepts with some key distinctions. In order to build a clear framework for future discussion, we will provide a definition of each.
Medical sensors are small electronic devices that capture data from a patient’s body in real time. Subcategories of sensors include wearables, portables, and digestibles. In other words, patients can carry, wear, or ingest sensors.
The sensor’s location on or inside the body varies, depending on what is being measured. For instance:
Sensors can track a wide range of health data while the body is active or resting, including:
After sensors collect data, they transmit it to a connected device via wireless technology. Research teams can then analyze and interpret data submissions from study participants via a central dashboard.
Sensors are exceptionally valuable to clinical researchers because they allow study teams to generate and monitor continuous and/or intermittent data. While traditional or electronic patient-reported outcomes provide subjective data snapshots, sensors can provide a steady stream of objective data. And, by generating larger volumes of real-time data, sensors serve as an effective alternative to other forms of outcome reporting. They can be especially useful for trials within disease states and therapeutic areas that benefit from a larger body of data.
There is one subcategory of sensors in particular—wearable sensors, or wearables—that is gaining significant traction in clinical research.
Wearables offer continuous monitoring and data reporting through devices that patients physically wear on their bodies. Sensors can be integrated into a wide range of wearable objects, including “smart” shirts, vests, watches, glasses, or socks.
Some researchers group sensors such as skin patches and intra-body devices in the wearables category. However, a strict definition of wearables includes sensors that a patient can easily put on and take off (“wear”) as needed according to the purposes of a study.
It’s important to note the differences between wearable sensors and consumer wearables. Some consumer-grade activity trackers may not deliver the accuracy needed for clinical research.
Wearable sensors, however, deliver constant, precise, real-time physiological and behavioral data. When appropriately included in a study protocol, these medical sensors can supply rich and robust information as either a complement to or a replacement for certain ePRO components.
In developing wearables, device manufacturers often focus on optimizing the form factor (physical characteristics) of a device, which improves the ease of use and overall participant experience in a trial. When wearables are not optimized for trials, compliance can suffer.
Despite these challenges, there is broad consensus that accessible tools like sensors and wearables have the potential to unlock new ways of capturing and interpreting patient data—with significant implications for clinical research.
Digital biomarkers represent another important area of advancement in clinical trial measurement. As the industry continues to debate the precise definition of "digital biomarkers," the FDA has stepped in to issue guidance and elaborate on the topic in a Nature article:
“FDA defines a digital biomarker to be a characteristic or set of characteristics, collected from digital health technologies, that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions.”
Building on its definition of digital biomarkers, the FDA shared several examples from published literature:
As we can see from the FDA’s definition and examples of digital biomarkers, a clear path can be traced between sensor-enabled data collection and digital biomarker use in clinical trials. Sensors and wearables collect data on physiological characteristics (digital biomarkers) that can be used to evaluate the efficacy and/or safety of an intervention.
For instance, study teams are digitally measuring changes in gait, speech, and loss of or slowed movements to indicate the presence of Parkinson's disease or other nervous system disorders. Other digital measurements associated with changes in perceptual-motor coordination, cognitive processing speed, prospective memory, spatial memory, gait, and inhibition are increasingly being considered as possible indicators of the presence of dementia.
To better understand the nuances of digital biomarkers, we can divide them into two subcategories: passive and active.
Biomarkers and their application are not novel: “Legacy” biomarkers have served as an integral component of clinical research and practice for many years. However, digital biomarkers differ from legacy biomarkers in that they harness technology to gather and apply objective data in multiple medical applications.
A simple example can be seen in blood pressure measurement. Standard blood pressure readings taken by a clinician with a manual sphygmomanometer represent a legacy biomarker, while blood pressure obtained through a remote sensor can be considered a digital biomarker. In a recent meta-analysis of the diagnostic accuracy of mercurial versus digital blood pressure measurement devices published in Nature, a broad range of digital blood pressure biomarkers for at-home use were shown to have moderate accuracy and to provide accurate information on blood pressure with which diagnostic and treatment decisions could be made. According to the Nature article, access to digital blood pressure devices “changes the quality of detection of hypertension and management and thus contributes to early diagnosis and prevention."
Sensors, wearables, and digital biomarkers can all be grouped under the term digital instruments. Digital instruments are defined as sensors and wearables that collect and quantify measurable patient data—enabling greater accessibility and greater accuracy. Using digital instruments, researchers are able to capture clinical data from patients in a way that is potentially transformative for clinical research.
When deployed in decentralized and hybrid clinical trials, digital instruments enable the participation of populations who have been underrepresented in clinical research. Digital instruments can be shipped directly to patients, allowing measurements to be conducted at home and breaking down traditional barriers such as geographic distance from a study site.
For instance, fibromyalgia and chronic fatigue syndrome limit a patient’s mobility and energy and, in turn, their ability to visit a clinical trial site. As a result, these and other therapeutic areas remain elusive research categories. Digital instruments serve to bridge such gaps, expanding access to populations traditionally excluded from clinical studies and mitigating stigmas associated with some medical conditions—all of which serve to open new frontiers for research.
Researchers in Pharmacological Reviews note the benefits of digital tools, including “increasing participation rates and enabling trials to be conducted in vulnerable populations with chronic diseases, such as the elderly, psychiatric patients, and children.” They add: “These patient groups have traditionally been neglected in clinical research because of a lack of mobility, additional ethical barriers, and low recruitment rates.”
The benefits of digital instruments extend beyond recruitment and into research authenticity and real-world practicality. These tools “allow study teams to measure the effects of an intervention in a patient’s natural environment, increasing a study’s ecological or ‘real world’ validity,” the authors note. “The objective nature of these measurements can lead to higher sensitivity and objectivity, compared with clinical rating scales. Wearable technology also offers high-frequency and situation-relevant measurements, moving away from the artificially contrived intervals used in clinical trials."
Increased accuracy is an ever-present objective for sponsors. Digital instrumentation is primed to help them achieve it, allowing for more frequent and ongoing monitoring in a real-life setting with increased precision—all factors that drive accuracy. The volume of data gathered through digital instruments provides results that can outperform traditional endpoints.
For example, a patient who checks their blood sugar manually four times daily receives four individual readings. But, a continuous glucose monitor provides clinicians with “big picture” insights—revealing when and whether glucose levels bottom out overnight and when exactly they are spiking.
This example demonstrates how digital instruments can transform medical assessment from snapshots to continuous or intermittent real-life tracking of data in a patient’s normal environment. These data points can complement technologies such as electronic patient-reported outcomes (ePRO) and electronic clinician-reported outcomes (eClinRO).
The dual benefit of digital instruments lies in their ability to unlock richer, more precise assessments that are also easier for participants. In a traditional clinical trial, comprehensive assessments are burdensome and therefore limited in frequency, so “they only provide snapshots of treatment efficacy,” according to an article in the Journal of Medical Internet Research.
“However, symptoms can fluctuate from week to week, day to day, and even within a single day,” the authors note. They add that in a traditional trial, a patient’s symptoms might improve or worsen by chance because of “factors unrelated to treatment efficacy.” Yet, without real-world data, it’s difficult to understand what’s causing those changes. Additionally, when a measurement is too infrequent, study teams run the risk of long time lapses prior to the detection of adverse or serious adverse events.
Continuous and intermittent real-life tracking from digital instruments is the solution to creating meaningful digital measures that represent real-life scenarios. These instruments deliver a cohesive feedback loop to support the early detection of adverse events and prevent health risks and costly delays.
Pharmaceutical and medical device sponsors are particularly interested in digital instruments’ ability to reduce the “noise” surrounding relevant clinical signals. For example, in assessments of cognition, the signal is the sensitivity of a measure to detect biological and cognitive changes due to the therapeutic intervention. Noise refers to the effects of external factors that can influence these measures. Improving the signal-to-noise ratio by eliminating as many external variables as possible helps to ensure that sponsors receive accurate data specific to the therapeutic manipulation in question.
When it comes to traditional ePRO reporting, biases, environmental factors, and inconsistencies in measurement negatively impact accuracy. Training of participants can significantly reduce this ratio. But, digital instruments can go a step further, facilitating—according to Dr. Nathan Cashdollar, Director of Digital Neuroscience at Cambridge Cognition—the assessment of “people with psychiatric and neurological disorders daily, and remotely, without the supervision of a healthcare professional, thus providing an improved signal-to-noise ratio, [which enables] a more sensitive metric of successful therapeutic interventions."
Realizing the benefits of digital instruments requires an intentional approach. In order for these innovations to be practically applied in clinical research—and, ultimately, generate more accurate and meaningful data—sponsors must first identify the right signal and then validate prospective digital instrumentation tools based on their ability to generate that signal.
The promise of digital instruments—and of the real-time, real-life signal capture they facilitate—to transform clinical research is clear. But, how should sponsors leverage these powerful capabilities to demonstrate efficacy and bring treatments to market more quickly and cost-effectively?
The starting point for any successful trial is the establishment of the right study endpoints. As advances in digital instruments and biomarkers evolve the way we measure outcomes, the clinical research industry has begun to think about the implications for trial endpoints. Too often the protocols in decentralized clinical trials adopt and perpetuate suboptimal outcomes assessments. The opportunity exists to leverage more nuanced digital measurements or outcomes to achieve stronger and increasingly digital endpoints.
Digital endpoints allow study teams to use digital instruments to monitor patients in their natural, real-world environment. These tools ultimately provide a more accurate assessment of the patient’s lived experience, including granular data that was previously undiscoverable.
In order to maximize the relevancy and accuracy of this data, sponsors must first determine which outcomes to measure in the clinical trial—one of the most critical study decisions—and then choose digital instruments in the earliest stages of study design to facilitate the optimal delivery of those outcomes.
In this process, study teams should consider the natural environments in which their endpoints arise. According to an article in Digital Biomarkers journal, successful digital endpoint and digital biomarker selection require intense interdisciplinary collaboration and “the development of an ecosystem in which the vast quantities of data those digital endpoints generate can be analyzed.”
The “vast quantities of data” these digital instruments gather should not be underestimated. Monitoring a patient continuously for several weeks generates gigabytes of information. A virtual research organization can partner with sponsors to streamline the process and determine how much of that data needs to be stored and analyzed.
As demonstrated, the validation of digital endpoints for clinical research studies is essential, and it requires the identification of the right digital instruments and outcomes from the outset.
When choosing a digital instrument for a clinical trial, it’s tempting to want to use all of the bells and whistles of a new device that promises to measure everything under the sun. However, more often than not, simple solutions are best. In a trial for an antihypertensive drug, for example, researchers might consider using a smartwatch to monitor pulse, activity, sleep habits, and skin temperature. But, if the study’s primary endpoint is reduced blood pressure following treatment, the team might choose a digital blood pressure cuff with fewer but better-quality signals.
After identifying the right digital endpoint for a study, sponsors must undertake three steps to select the correct digital instrumentation for their clinical research. Digital instruments for clinical trials must succeed through all three steps—technical, clinical, and practical validation—to ensure accurate data collection and a successful trial.
Technical validation of a digital instrument must precede any of the other, more “hands-on” steps of validation. The goal in this stage is to assess whether an instrument is fit for use in the trial.
Technical validation determines how usable, reliable, and reproducible the technology is. The device must meet minimum technological standards used by the healthcare industry with an automated flow of data, requiring minimal manual aggregation and manipulation by expert raters and trial teams.
Once again, reducing the participant burden and providing a good user experience are key. Pharmacological Reviews notes: "In this phase of validation, it is also advised to consider the amount of training and instruction that will be necessary to ensure measurements are conducted correctly by patients." Susan Dallabrida, CEO of SPRIM—experts in DCT protocol development and optimization adds: "In our experience, short training modules can significantly improve the accuracy of outcomes. When patients are clear about how and when to use a device, so many issues can be avoided.”
In addition to complying with regulations set forth by agencies such as the FDA, digital instruments should offer minimal inter-device and intra-device variability. This helps to facilitate the collection of the most accurate data possible in a clinical trial. The device should also offer a degree of privacy to patients using encryption.
Following technical validation is clinical validation, also known as medical validation. In this phase, study teams determine the instrument’s value to the trial—and its suitability to the study’s scientific parameters. The importance of clinical validation to data quality cannot be overstated.
To optimize this process, an article in Pharmacological Review suggests several important factors to take into consideration during the clinical validation phase to appraise:
Once a device has passed through technical and clinical validation, it should then be assessed for participant validation. This phase ensures the device will be tolerable and usable for trial participants.
Patient technology training is necessary for every clinical trial that utilizes a digital instrument; some patients will naturally be more technologically savvy than others. But, even for the most technology-apt individual, a difficult user interface will decrease participation, introducing the risk of non-compliance and participant drop-off. Like all phases of validation, the participants’ adoption is essential. Additionally, it is important that users have a common base of understanding of the way they should use the instrument.
In a Digital Biomarkers article, the authors explain: “Patient engagement, early and often, is paramount to thoughtfully selecting what is most important. Without patient-focused measurement, stakeholders risk entrenching digital versions of poor traditional assessments and proliferating low-value tools that are ineffective, burdensome, and reduce both quality and efficiency in clinical research and care.”
These processes, as well as the digital instruments themselves, are transforming the way clinical trials are conducted today. But, the future of digital instruments is even more promising.
In the years ahead, digital instrumentation will continue to enhance data capture and analysis in clinical trials. Digital instruments that leverage artificial intelligence (AI) and machine learning will extend the possibilities further—allowing study teams to reach previously unattainable digital endpoints.
Artificial intelligence (AI) is driving innovation across many industries, and clinical research is no different. AI allows clinical trial teams to use technology to perform tasks that normally require human intelligence. For instance, automated searches of participant data can generate insights to enhance health outcomes and patient experiences.
Sponsors can leverage these sophisticated models to make sense of the vast volumes of data enabled by continuous and intermittent real-life monitoring. Machine learning, deep learning, computer vision, and signal processing all offer the potential for more precise, user-centric assessments and analysis.
Machine learning (ML) is a “branch of AI and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy,” according to IBM. Computer systems that use machine learning infer meaning from patterns in data to get “smarter” over time.
Researchers explain in Trials: “Machine learning has the potential to help improve the success, generalizability, patient-centeredness, and efficiency of clinical trials. Various ML approaches are available for managing large and heterogeneous sources of data, identifying intricate and occult patterns, and predicting complex outcomes. As a result, ML has value to add across the spectrum of clinical trials, from preclinical drug discovery to pre-trial planning through study execution to data management and analysis.”
Deep learning is a type of machine learning and AI that mimics the way humans gain some types of knowledge. The indicator “deep” comes from the many layers of complexity the system uses to make predictions, learn correct information, and build a stronger understanding. The output of deep learning is a statistical model that becomes more accurate over time.
Computer vision trains machines to capture and interpret visual information using machine learning models.
With computer vision, a clinical research app can analyze image and video data to classify objects, gather data, identify patterns, and flag errors in much less time than it would take a human to do so. Computer vision also allows systems to respond to human interaction—for instance, using facial recognition to unlock a smartphone.
“Computer vision can improve both speed and accuracy when analyzing medical imaging: recognizing hidden patterns and making diagnoses with fewer errors than human professionals,” according to HIMSS. More accurate, efficient imaging analysis will also support the continued development of tools such as augmented ePRO.
Researchers often collect signal data such as sound, images, and biological indicators such as ECG. However, distortions and background “noise”—the effects of external biases that can influence signals—can make high-quality data hard to gather.
Signal processing can address this 'signal-to-noise ratio' problem. This area of electrical engineering models and analyzes data representations of physical events. Backed by a deep conceptual understanding of a large data pool, signal processing can lessen the effects of potential noise. Applications for signal processing include the use of vocal biomarkers for conditions as varied as Parkinson’s Disease, Alzheimer’s Disease, multiple sclerosis, and rheumatoid arthritis.
Improving the signal-to-noise ratio using the rapidly evolving capabilities of AI will allow sponsors to achieve even greater accuracy with digital instrumentation. And layering these capabilities on top of unprecedented volumes of collected data will reveal new relationships between symptoms and disease states—leading to novel endpoints that can advance medicine beyond what was envisioned a decade ago.
The conclusion is clear: Sensors, wearables, and digital biomarkers are enabling greater clinical trial accuracy with reduced patient burden. These tools advance clinical research across therapeutic areas, deliver precise tracking to generate richer data and mitigate risk, and capture meaningful signals in real-time. And the ongoing innovation in AI-enabled technologies ensures that digital instruments will continue to deliver increasingly powerful therapeutic insights in the years to come.
More sponsors than ever are embracing clinical research through digital endpoints backed by sophisticated, user-centric tools. To take advantage of the wearables, sensors, and digital biomarkers revolution, sponsors must adopt a strategic approach, thoughtfully selecting and integrating digital instruments when appropriate, to achieve precise data that correlates to their specific clinical endpoints—and unlocks a new paradigm of accuracy.
Healthcare’s digital revolution is reshaping medical research, diagnostics, and therapeutics. One prime example: Digital health technologies (DHTs) such as sensors, wearables, and digital biomarkers which are gaining widespread adoption among patients, providers, and clinical researchers. In fact, forward-thinking sponsors are turning to sensors and wearables to capture richer, better data in clinical trials across therapeutic areas and sectors.
The life sciences industry’s embrace of sensors and wearables underscores that these tools aren’t a flash in the pan. On the contrary: Digital instruments such as sensors and wearables are rapidly transforming the industry’s definition of accuracy across three key measures: volume, practicality, and precision. With the added benefit of AI and machine learning, these tools could even allow sponsors to develop novel digital biomarkers, with profound implications for scientific discovery.
In this article, we’ll take a deep dive into sensors, wearables, and digital biomarkers within clinical research. We’ll propose a new paradigm of accuracy, explore how these tools are shaping the future of the industry, and discuss how to assess these devices for potential inclusion in a clinical trial.
Sensors, wearables, and digital biomarkers are interconnected, overlapping concepts with some key distinctions. In order to build a clear framework for future discussion, we will provide a definition of each.
Medical sensors are small electronic devices that capture data from a patient’s body in real time. Subcategories of sensors include wearables, portables, and digestibles. In other words, patients can carry, wear, or ingest sensors.
The sensor’s location on or inside the body varies, depending on what is being measured. For instance:
Sensors can track a wide range of health data while the body is active or resting, including:
After sensors collect data, they transmit it to a connected device via wireless technology. Research teams can then analyze and interpret data submissions from study participants via a central dashboard.
Sensors are exceptionally valuable to clinical researchers because they allow study teams to generate and monitor continuous and/or intermittent data. While traditional or electronic patient-reported outcomes provide subjective data snapshots, sensors can provide a steady stream of objective data. And, by generating larger volumes of real-time data, sensors serve as an effective alternative to other forms of outcome reporting. They can be especially useful for trials within disease states and therapeutic areas that benefit from a larger body of data.
There is one subcategory of sensors in particular—wearable sensors, or wearables—that is gaining significant traction in clinical research.
Wearables offer continuous monitoring and data reporting through devices that patients physically wear on their bodies. Sensors can be integrated into a wide range of wearable objects, including “smart” shirts, vests, watches, glasses, or socks.
Some researchers group sensors such as skin patches and intra-body devices in the wearables category. However, a strict definition of wearables includes sensors that a patient can easily put on and take off (“wear”) as needed according to the purposes of a study.
It’s important to note the differences between wearable sensors and consumer wearables. Some consumer-grade activity trackers may not deliver the accuracy needed for clinical research.
Wearable sensors, however, deliver constant, precise, real-time physiological and behavioral data. When appropriately included in a study protocol, these medical sensors can supply rich and robust information as either a complement to or a replacement for certain ePRO components.
In developing wearables, device manufacturers often focus on optimizing the form factor (physical characteristics) of a device, which improves the ease of use and overall participant experience in a trial. When wearables are not optimized for trials, compliance can suffer.
Despite these challenges, there is broad consensus that accessible tools like sensors and wearables have the potential to unlock new ways of capturing and interpreting patient data—with significant implications for clinical research.
Digital biomarkers represent another important area of advancement in clinical trial measurement. As the industry continues to debate the precise definition of "digital biomarkers," the FDA has stepped in to issue guidance and elaborate on the topic in a Nature article:
“FDA defines a digital biomarker to be a characteristic or set of characteristics, collected from digital health technologies, that is measured as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions.”
Building on its definition of digital biomarkers, the FDA shared several examples from published literature:
As we can see from the FDA’s definition and examples of digital biomarkers, a clear path can be traced between sensor-enabled data collection and digital biomarker use in clinical trials. Sensors and wearables collect data on physiological characteristics (digital biomarkers) that can be used to evaluate the efficacy and/or safety of an intervention.
For instance, study teams are digitally measuring changes in gait, speech, and loss of or slowed movements to indicate the presence of Parkinson's disease or other nervous system disorders. Other digital measurements associated with changes in perceptual-motor coordination, cognitive processing speed, prospective memory, spatial memory, gait, and inhibition are increasingly being considered as possible indicators of the presence of dementia.
To better understand the nuances of digital biomarkers, we can divide them into two subcategories: passive and active.
Biomarkers and their application are not novel: “Legacy” biomarkers have served as an integral component of clinical research and practice for many years. However, digital biomarkers differ from legacy biomarkers in that they harness technology to gather and apply objective data in multiple medical applications.
A simple example can be seen in blood pressure measurement. Standard blood pressure readings taken by a clinician with a manual sphygmomanometer represent a legacy biomarker, while blood pressure obtained through a remote sensor can be considered a digital biomarker. In a recent meta-analysis of the diagnostic accuracy of mercurial versus digital blood pressure measurement devices published in Nature, a broad range of digital blood pressure biomarkers for at-home use were shown to have moderate accuracy and to provide accurate information on blood pressure with which diagnostic and treatment decisions could be made. According to the Nature article, access to digital blood pressure devices “changes the quality of detection of hypertension and management and thus contributes to early diagnosis and prevention."
Sensors, wearables, and digital biomarkers can all be grouped under the term digital instruments. Digital instruments are defined as sensors and wearables that collect and quantify measurable patient data—enabling greater accessibility and greater accuracy. Using digital instruments, researchers are able to capture clinical data from patients in a way that is potentially transformative for clinical research.
When deployed in decentralized and hybrid clinical trials, digital instruments enable the participation of populations who have been underrepresented in clinical research. Digital instruments can be shipped directly to patients, allowing measurements to be conducted at home and breaking down traditional barriers such as geographic distance from a study site.
For instance, fibromyalgia and chronic fatigue syndrome limit a patient’s mobility and energy and, in turn, their ability to visit a clinical trial site. As a result, these and other therapeutic areas remain elusive research categories. Digital instruments serve to bridge such gaps, expanding access to populations traditionally excluded from clinical studies and mitigating stigmas associated with some medical conditions—all of which serve to open new frontiers for research.
Researchers in Pharmacological Reviews note the benefits of digital tools, including “increasing participation rates and enabling trials to be conducted in vulnerable populations with chronic diseases, such as the elderly, psychiatric patients, and children.” They add: “These patient groups have traditionally been neglected in clinical research because of a lack of mobility, additional ethical barriers, and low recruitment rates.”
The benefits of digital instruments extend beyond recruitment and into research authenticity and real-world practicality. These tools “allow study teams to measure the effects of an intervention in a patient’s natural environment, increasing a study’s ecological or ‘real world’ validity,” the authors note. “The objective nature of these measurements can lead to higher sensitivity and objectivity, compared with clinical rating scales. Wearable technology also offers high-frequency and situation-relevant measurements, moving away from the artificially contrived intervals used in clinical trials."
Increased accuracy is an ever-present objective for sponsors. Digital instrumentation is primed to help them achieve it, allowing for more frequent and ongoing monitoring in a real-life setting with increased precision—all factors that drive accuracy. The volume of data gathered through digital instruments provides results that can outperform traditional endpoints.
For example, a patient who checks their blood sugar manually four times daily receives four individual readings. But, a continuous glucose monitor provides clinicians with “big picture” insights—revealing when and whether glucose levels bottom out overnight and when exactly they are spiking.
This example demonstrates how digital instruments can transform medical assessment from snapshots to continuous or intermittent real-life tracking of data in a patient’s normal environment. These data points can complement technologies such as electronic patient-reported outcomes (ePRO) and electronic clinician-reported outcomes (eClinRO).
The dual benefit of digital instruments lies in their ability to unlock richer, more precise assessments that are also easier for participants. In a traditional clinical trial, comprehensive assessments are burdensome and therefore limited in frequency, so “they only provide snapshots of treatment efficacy,” according to an article in the Journal of Medical Internet Research.
“However, symptoms can fluctuate from week to week, day to day, and even within a single day,” the authors note. They add that in a traditional trial, a patient’s symptoms might improve or worsen by chance because of “factors unrelated to treatment efficacy.” Yet, without real-world data, it’s difficult to understand what’s causing those changes. Additionally, when a measurement is too infrequent, study teams run the risk of long time lapses prior to the detection of adverse or serious adverse events.
Continuous and intermittent real-life tracking from digital instruments is the solution to creating meaningful digital measures that represent real-life scenarios. These instruments deliver a cohesive feedback loop to support the early detection of adverse events and prevent health risks and costly delays.
Pharmaceutical and medical device sponsors are particularly interested in digital instruments’ ability to reduce the “noise” surrounding relevant clinical signals. For example, in assessments of cognition, the signal is the sensitivity of a measure to detect biological and cognitive changes due to the therapeutic intervention. Noise refers to the effects of external factors that can influence these measures. Improving the signal-to-noise ratio by eliminating as many external variables as possible helps to ensure that sponsors receive accurate data specific to the therapeutic manipulation in question.
When it comes to traditional ePRO reporting, biases, environmental factors, and inconsistencies in measurement negatively impact accuracy. Training of participants can significantly reduce this ratio. But, digital instruments can go a step further, facilitating—according to Dr. Nathan Cashdollar, Director of Digital Neuroscience at Cambridge Cognition—the assessment of “people with psychiatric and neurological disorders daily, and remotely, without the supervision of a healthcare professional, thus providing an improved signal-to-noise ratio, [which enables] a more sensitive metric of successful therapeutic interventions."
Realizing the benefits of digital instruments requires an intentional approach. In order for these innovations to be practically applied in clinical research—and, ultimately, generate more accurate and meaningful data—sponsors must first identify the right signal and then validate prospective digital instrumentation tools based on their ability to generate that signal.
The promise of digital instruments—and of the real-time, real-life signal capture they facilitate—to transform clinical research is clear. But, how should sponsors leverage these powerful capabilities to demonstrate efficacy and bring treatments to market more quickly and cost-effectively?
The starting point for any successful trial is the establishment of the right study endpoints. As advances in digital instruments and biomarkers evolve the way we measure outcomes, the clinical research industry has begun to think about the implications for trial endpoints. Too often the protocols in decentralized clinical trials adopt and perpetuate suboptimal outcomes assessments. The opportunity exists to leverage more nuanced digital measurements or outcomes to achieve stronger and increasingly digital endpoints.
Digital endpoints allow study teams to use digital instruments to monitor patients in their natural, real-world environment. These tools ultimately provide a more accurate assessment of the patient’s lived experience, including granular data that was previously undiscoverable.
In order to maximize the relevancy and accuracy of this data, sponsors must first determine which outcomes to measure in the clinical trial—one of the most critical study decisions—and then choose digital instruments in the earliest stages of study design to facilitate the optimal delivery of those outcomes.
In this process, study teams should consider the natural environments in which their endpoints arise. According to an article in Digital Biomarkers journal, successful digital endpoint and digital biomarker selection require intense interdisciplinary collaboration and “the development of an ecosystem in which the vast quantities of data those digital endpoints generate can be analyzed.”
The “vast quantities of data” these digital instruments gather should not be underestimated. Monitoring a patient continuously for several weeks generates gigabytes of information. A virtual research organization can partner with sponsors to streamline the process and determine how much of that data needs to be stored and analyzed.
As demonstrated, the validation of digital endpoints for clinical research studies is essential, and it requires the identification of the right digital instruments and outcomes from the outset.
When choosing a digital instrument for a clinical trial, it’s tempting to want to use all of the bells and whistles of a new device that promises to measure everything under the sun. However, more often than not, simple solutions are best. In a trial for an antihypertensive drug, for example, researchers might consider using a smartwatch to monitor pulse, activity, sleep habits, and skin temperature. But, if the study’s primary endpoint is reduced blood pressure following treatment, the team might choose a digital blood pressure cuff with fewer but better-quality signals.
After identifying the right digital endpoint for a study, sponsors must undertake three steps to select the correct digital instrumentation for their clinical research. Digital instruments for clinical trials must pass all three steps—technical, clinical, and participant validation—to ensure accurate data collection and a successful trial.
Technical validation of a digital instrument must precede any of the other, more “hands-on” steps of validation. The goal in this stage is to assess whether an instrument is fit for use in the trial.
Technical validation determines how usable, reliable, and reproducible the technology is. The device must meet the minimum technological standards used by the healthcare industry and support an automated flow of data, requiring minimal manual aggregation and manipulation by expert raters and trial teams.
Once again, reducing the participant burden and providing a good user experience are key. Pharmacological Reviews notes: “In this phase of validation, it is also advised to consider the amount of training and instruction that will be necessary to ensure measurements are conducted correctly by patients.” Susan Dallabrida, CEO of SPRIM—experts in DCT protocol development and optimization—adds: “In our experience, short training modules can significantly improve the accuracy of outcomes. When patients are clear about how and when to use a device, so many issues can be avoided.”
In addition to complying with regulations set forth by agencies such as the FDA, digital instruments should exhibit minimal inter-device and intra-device variability. This helps to facilitate the collection of the most accurate data possible in a clinical trial. The device should also protect patient privacy through measures such as data encryption.
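As an illustration of what minimal inter-device and intra-device variability can look like in practice, the sketch below (with made-up readings against a hypothetical calibrated reference) computes a simple coefficient of variation within each device and across devices:

```python
import statistics

def coefficient_of_variation(readings):
    """Standard deviation as a fraction of the mean: a simple spread measure."""
    return statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical repeated readings of the same reference value (e.g., a calibrated
# blood pressure simulator); the numbers are illustrative only.
device_a = [120.4, 119.8, 120.1, 120.6, 119.9]
device_b = [121.2, 121.0, 121.5, 120.9, 121.3]
device_c = [119.1, 119.4, 118.9, 119.2, 119.0]

# Intra-device variability: spread of repeated readings from a single unit.
for name, readings in [("A", device_a), ("B", device_b), ("C", device_c)]:
    print(f"Device {name} intra-device CV: {coefficient_of_variation(readings):.4f}")

# Inter-device variability: spread of the per-device means across units.
device_means = [statistics.mean(r) for r in (device_a, device_b, device_c)]
print(f"Inter-device CV: {coefficient_of_variation(device_means):.4f}")
```

Acceptance thresholds for metrics like these would be set per measurement and per study; the point is that variability can be quantified and compared before a device is selected.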
Following technical validation is clinical validation, also known as medical validation. In this phase, study teams determine the instrument’s value to the trial—and its suitability to the study’s scientific parameters. The importance of clinical validation to data quality cannot be overstated.
To optimize this process, an article in Pharmacological Reviews suggests several important factors to appraise during the clinical validation phase:
Once a device has passed through technical and clinical validation, it should then be assessed for participant validation. This phase ensures the device will be tolerable and usable for trial participants.
Patient technology training is necessary for every clinical trial that utilizes a digital instrument; some patients will naturally be more technologically savvy than others. But, even for the most technologically adept individual, a difficult user interface will decrease participation, introducing the risk of non-compliance and participant drop-off. As with all phases of validation, participant adoption is essential. It is also important that users share a common understanding of how they should use the instrument.
In a Digital Biomarkers article, the authors explain: “Patient engagement, early and often, is paramount to thoughtfully selecting what is most important. Without patient-focused measurement, stakeholders risk entrenching digital versions of poor traditional assessments and proliferating low-value tools that are ineffective, burdensome, and reduce both quality and efficiency in clinical research and care.”
These processes, as well as the digital instruments themselves, are transforming the way clinical trials are conducted today. But, the future of digital instruments is even more promising.
In the years ahead, digital instrumentation will continue to enhance data capture and analysis in clinical trials. Digital instruments that leverage artificial intelligence (AI) and machine learning will extend the possibilities further—allowing study teams to reach previously unattainable digital endpoints.
Artificial intelligence (AI) is driving innovation across many industries, and clinical research is no different. AI allows clinical trial teams to use technology to perform tasks that normally require human intelligence. For instance, automated searches of participant data can generate insights to enhance health outcomes and patient experiences.
Sponsors can leverage these sophisticated models to make sense of the vast volumes of data enabled by continuous and intermittent real-life monitoring. Machine learning, deep learning, computer vision, and signal processing all offer the potential for more precise, user-centric assessments and analysis.
Machine learning (ML) is a “branch of AI and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy,” according to IBM. Computer systems that use machine learning infer meaning from patterns in data to get “smarter” over time.
Researchers explain in Trials: “Machine learning has the potential to help improve the success, generalizability, patient-centeredness, and efficiency of clinical trials. Various ML approaches are available for managing large and heterogeneous sources of data, identifying intricate and occult patterns, and predicting complex outcomes. As a result, ML has value to add across the spectrum of clinical trials, from preclinical drug discovery to pre-trial planning through study execution to data management and analysis.”
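As a minimal illustration of how a system can “infer meaning from patterns in data,” the sketch below (using scikit-learn and synthetic numbers, not real trial data) fits a simple classifier that learns to separate two activity profiles from daily summary features a wearable might report:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic daily summaries: [step count, resting heart rate].
# Group 0 mimics lower activity / higher resting HR; group 1 the opposite.
low_activity = np.column_stack([rng.normal(3000, 800, 100), rng.normal(78, 5, 100)])
high_activity = np.column_stack([rng.normal(9000, 1500, 100), rng.normal(65, 5, 100)])

X = np.vstack([low_activity, high_activity])
y = np.array([0] * 100 + [1] * 100)

# Scale the features, then fit a logistic regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# The fitted model maps new daily summaries to a predicted activity group.
print(model.predict([[4000, 75], [8500, 66]]))   # expected: [0 1]
```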
Deep learning is a type of machine learning and AI that mimics the way humans gain some types of knowledge. The “deep” refers to the many layers of complexity the system uses to make predictions, learn correct information, and build a stronger understanding. The output of deep learning is a statistical model that becomes more accurate over time.
Computer vision trains machines to capture and interpret visual information using machine learning models.
With computer vision, a clinical research app can analyze image and video data to classify objects, gather data, identify patterns, and flag errors in much less time than it would take a human to do so. Computer vision also allows systems to respond to human interaction—for instance, using facial recognition to unlock a smartphone.
“Computer vision can improve both speed and accuracy when analyzing medical imaging: recognizing hidden patterns and making diagnoses with fewer errors than human professionals,” according to HIMSS. More accurate, efficient imaging analysis will also support the continued development of tools such as augmented ePRO.
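For a sense of the underlying mechanics, the sketch below (assuming PyTorch and torchvision are available; the pretrained model and the image path are placeholders, not a clinical pipeline) loads an off-the-shelf image classifier and scores a single image. A real medical-imaging application would instead use a model trained and validated on the relevant imaging data:

```python
import torch
from torchvision import models
from PIL import Image

# Load an off-the-shelf classifier with its published pretrained weights.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()            # resize, crop, and normalize as the model expects

img = Image.open("example_frame.jpg")        # placeholder image path
batch = preprocess(img).unsqueeze(0)         # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

# Report the three highest-scoring categories from the model's label set.
top = torch.topk(probs, k=3)
for score, idx in zip(top.values, top.indices):
    print(weights.meta["categories"][int(idx)], round(float(score), 3))
```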
Researchers often collect signal data such as sound, images, and biological indicators such as ECG. However, distortions and background “noise”—the effects of external biases that can influence signals—can make high-quality data hard to gather.
Signal processing can address this signal-to-noise problem. This area of electrical engineering models and analyzes data representations of physical events. Backed by a deep conceptual understanding of a large data pool, signal processing can lessen the effects of potential noise. Applications for signal processing include the use of vocal biomarkers for conditions as varied as Parkinson’s disease, Alzheimer’s disease, multiple sclerosis, and rheumatoid arthritis.
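As a simple, self-contained illustration of noise reduction (a sketch using SciPy and synthetic data, with an assumed sampling rate and band of interest), the code below applies a band-pass filter so that a low-frequency physiological rhythm stands out against broadband noise:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                    # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                  # ten seconds of samples

# Synthetic 1.2 Hz "physiological" rhythm buried in broadband noise.
clean = np.sin(2 * np.pi * 1.2 * t)
noisy = clean + 0.8 * np.random.default_rng(0).normal(size=t.size)

# Band-pass filter around an assumed band of interest (0.5-3 Hz).
b, a = butter(N=4, Wn=[0.5, 3.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, noisy)              # zero-phase filtering

def snr_db(estimate, reference):
    """Signal-to-noise ratio in decibels, treating deviation from the reference as noise."""
    noise = estimate - reference
    return 10 * np.log10(np.mean(reference ** 2) / np.mean(noise ** 2))

print(f"SNR before filtering: {snr_db(noisy, clean):.1f} dB")
print(f"SNR after filtering:  {snr_db(filtered, clean):.1f} dB")
```

In this toy example the filter markedly improves the measured signal-to-noise ratio; production pipelines combine such filtering with artifact detection and validated feature extraction.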
Improving the signal-to-noise ratio using the rapidly evolving capabilities of AI will allow sponsors to achieve even greater accuracy with digital instrumentation. And layering these capabilities on top of unprecedented volumes of collected data will reveal new relationships between symptoms and disease states—leading to novel endpoints that can advance medicine beyond what was envisioned a decade ago.
The conclusion is clear: Sensors, wearables, and digital biomarkers are enabling greater clinical trial accuracy with reduced patient burden. These tools advance clinical research across therapeutic areas, deliver precise tracking that generates richer data and mitigates risk, and capture meaningful signals in real time. And ongoing innovation in AI-enabled technologies ensures that digital instruments will continue to deliver increasingly powerful therapeutic insights in the years to come.
More sponsors than ever are embracing clinical research through digital endpoints backed by sophisticated, user-centric tools. To take advantage of the wearables, sensors, and digital biomarkers revolution, sponsors must adopt a strategic approach, thoughtfully selecting and integrating digital instruments when appropriate, to achieve precise data that correlates to their specific clinical endpoints—and unlocks a new paradigm of accuracy.
Virtual clinical trials aren’t just a buzzword—these research models are here to stay. In fact, the global market for virtual clinical trials is expected to reach $12.9 billion by 2030, according to Grand View Research.
The success of the mRNA vaccines has radically reset expectations for how quickly a vaccine clinical trial can be completed.