One of the keys to optimizing ePRO is deploying artificial intelligence (AI) to support human participants and clinical teams.
Leveraging digital instruments to collect ePROs in clinical trials allows research teams to gather real-time data while also improving convenience for study participants. Incorporating clinical raters to assess ePRO submissions—such as image, video, and audio files—can also decrease the subjectivity of patient-reported data points, facilitating more accurate outcomes.
Yet, even with these improved processes, subjectivity in ePRO and variability in expert ratings can compromise data quality and, ultimately, trial endpoints. Here again, innovative technology can help mitigate opportunities for human error, bringing teams closer to the quality data they need.
In addition to harnessing digital instruments and standardizing expert ratings, the third key to optimizing ePRO is deploying AI to support human participants and clinical teams. We’ll cover how AI-enabled tools—from guided capture to auto-flagging rating discrepancies—can powerfully augment ePRO collection and assessment within decentralized and hybrid clinical trials.
Let’s start by understanding the participant perspective. Because ePRO relies on participants to capture their own data, the quality of that data depends on each person’s ability to correctly record and submit the required information. In fact, this dependency represents the Achilles’ heel of ePRO.
It’s unrealistic to expect the average participant to consistently capture and submit high-quality data on their own, especially if taking a photo or video is required. Participants need user-friendly tools and support to generate unstructured data that meets quality standards. Otherwise, they may submit ePRO with any number of issues—such as blurry images or audio recordings with ambient noise—or even accidentally expose their own protected health information (PHI).
This low-quality data is often unusable, which puts participants in the stressful position of trying to recapture data that meets quality benchmarks. These negative experiences can hamper their overall engagement and compliance with the study protocol. Bottom line: Participants need technology that takes the guesswork out of unstructured data capture and submission.
Now, let’s review common challenges facing expert clinical raters. First, being human, experts sometimes make mistakes. And, since clinical judgment and opinions can vary from person to person, clinical raters sometimes interpret the same data in different ways. For this reason, experts need systems that provide support and standardization.
Of course, low-quality submissions from participants are another hurdle for clinical raters. Manually parsing through unusable submissions is not the highest and best use of a clinical rater’s valuable time and expertise. Clinical raters need tools that not only help generate high-quality ePRO but also filter out unsatisfactory data before it reaches experts.
Artificial intelligence can help resolve these conundrums for participants, clinical raters, and entire study teams. AI unlocks the human potential of ePRO by streamlining and demystifying the ways that people capture and share data.
AI-assisted technologies can help participants record source data in real time with a couple of taps on their mobile devices:
· Guided image capture directs participants, showing them where to focus their camera, where to zoom in, and how to position their bodies. For example, an app can detect a participant’s face and indicate whether it’s in the right place (or how to move it to the right place).
· Automatic blur and angle detection, as well as lighting assistance, help participants capture the outcome correctly, so it can be submitted “ready to rate” (a minimal sketch of one such blur check follows this list).
· Image segmentation can identify and remove unwanted, irrelevant visual elements, such as PHI or distracting backgrounds, from ePRO media submissions.
· Similarly, audio exploration can detect and automatically capture acoustic events within ePRO audio recordings, from coughs to infant cries. This tool enables teams to screen out false positives and false negatives in ambient recordings, producing more easily ratable outcomes.
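To make one of these checks concrete, here is a minimal sketch of automatic blur detection, assuming Python with OpenCV. A common approach scores sharpness as the variance of the image’s Laplacian (blurry images have weak edges and therefore low variance) and prompts the participant to retake the photo when the score falls below a tuned threshold. The threshold value and function name are illustrative assumptions, not the interface of any particular ePRO platform.

```python
# Minimal sketch of a pre-submission blur check, assuming Python with OpenCV.
# The threshold and function name are illustrative, not from any specific platform.
import cv2

BLUR_THRESHOLD = 100.0  # illustrative cutoff; a real system would tune this per study


def is_too_blurry(image_path: str) -> bool:
    """Return True if an image is likely too blurry to be rated."""
    image = cv2.imread(image_path)
    if image is None:
        raise ValueError(f"Could not read image: {image_path}")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a common sharpness proxy: blurry images
    # have weak edges, so their Laplacian response has low variance.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < BLUR_THRESHOLD
```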
Complementing the improved participant experience, AI also streamlines the expert rating process. To support outcome accuracy, AI can help prevent errors in outcome measures:
· Guided capture and other participant-friendly tools help generate quality data, saving experts’ valuable time.
· AI-based outcome metrics provide an accurate comparison point, allowing teams to identify intra-/inter-rater variability and discrepancies between the AI-based rating and the experts’ ratings.
· Technology that auto-flags rater discrepancies can support standardization in scoring.
An AI-supported platform can automatically return any ratings that exceed AI-based discrepancy thresholds. Passing these ratings back to clinical experts for further review and explanation can improve rating objectivity and quality.
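As a rough illustration of auto-flagging, the sketch below compares each expert score against an AI reference score and collects any rating whose gap exceeds a threshold, so those ratings can be routed back for re-review. The record fields and threshold are assumptions for illustration, not any platform’s actual data model.

```python
# Minimal sketch of discrepancy auto-flagging; field names and the threshold
# are illustrative assumptions.
from dataclasses import dataclass

DISCREPANCY_THRESHOLD = 1.0  # illustrative: maximum tolerated gap on the rating scale


@dataclass
class Rating:
    submission_id: str
    rater_id: str
    expert_score: float
    ai_score: float


def ratings_for_review(ratings: list[Rating]) -> list[Rating]:
    """Return ratings whose expert/AI gap exceeds the threshold, so they can
    be sent back to clinical experts for further review and explanation."""
    return [r for r in ratings if abs(r.expert_score - r.ai_score) > DISCREPANCY_THRESHOLD]
```

In practice, the threshold would be set per instrument and rating scale, and flagged ratings would carry their review history for audit purposes.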
Study teams can leverage AI-assisted technologies today to improve the data collection and assessment process from start to finish. But that’s not all. Forward-thinking teams can also apply AI to contribute to the novel outcomes of tomorrow.
For instance, clinical notes (annotation) features allow experts to capture nuances in their ratings and detail why each rating was given. These open-text descriptions can be used to train AI algorithms and program automatic ratings, working toward a digital system that delivers ratings with expert-level accuracy. Optimized over time, such a system can lead to better, more effective clinical rating processes.
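As a hedged sketch of how such annotations might be staged for later model training, the snippet below pairs each expert score with its free-text rationale as one labeled example in a JSON Lines file. The record structure is an assumption for illustration; a real pipeline would also handle de-identification and audit trails.

```python
# Minimal sketch: staging expert scores and rationales as labeled examples.
# The record structure and file name are illustrative assumptions.
import json


def to_training_record(submission_id: str, score: float, rationale: str) -> dict:
    """Pair an expert's score with their written rationale so the pair can
    later serve as a labeled example when training a rating model."""
    return {"submission_id": submission_id, "label": score, "rationale": rationale}


# Hypothetical example record:
records = [to_training_record("sub-001", 3.0, "Redness extends beyond the lesion border.")]
with open("annotations.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```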
This three-part blog series demonstrates the potential for technology to augment and improve the entire ePRO collection and assessment process. From using digital instruments and standardizing expert ratings to deploying artificial intelligence, it’s clear that teams have more tools than ever before to unlock the full potential of ePRO.