AI is well suited to sifting through large databases and surfacing patterns: it can process large volumes of data quickly and detect patterns, trends and relationships, including intricate, high-dimensional ones that are difficult for humans or traditional methods to find. It is also scalable and adaptable, handles diverse data formats and delivers consistent, reliable results. This allows AI to work through the large amounts of data needed for accurate probability of technical and regulatory success (PTRS) assessments quickly and efficiently, providing valuable insights that can inform decision-making in drug development.

Pharmaceutical companies use several traditional approaches to inform their decision-making. The most common are:

  • Heuristic – Based on an individual’s experience and knowledge related to the domain
  • Historical Benchmark – Based on a non-weighted analysis of prior assets developed in the area of interest
  • Statistical Analysis – Based on univariate or multivariate regression analysis, often with weighted factors (see the illustrative sketch after this list)
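
For illustration only, here is a minimal Python sketch contrasting a non-weighted historical benchmark with a weighted regression analysis; the factors, weights and outcomes are hypothetical and do not reflect Intelligencia's data or methodology.

```python
# Toy comparison of a historical benchmark with a weighted logistic regression.
# All data, factors and weights below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical prior assets in the area of interest: 1 = success, 0 = failure
outcomes = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])

# Historical benchmark: a non-weighted success rate over prior assets
benchmark_ptrs = outcomes.mean()

# Statistical analysis: regression on a few hypothetical asset-level factors
# (e.g., biomarker-selected population, orphan designation, phase of entry)
factors = np.array([
    [1, 0, 2], [0, 0, 1], [0, 1, 1], [1, 1, 3], [0, 0, 2],
    [0, 0, 1], [1, 0, 1], [1, 1, 3], [0, 1, 2], [0, 0, 1],
])
weights = np.linspace(0.5, 1.0, len(outcomes))  # e.g., weight recent assets more

model = LogisticRegression().fit(factors, outcomes, sample_weight=weights)
new_asset = np.array([[1, 1, 2]])

print(f"Benchmark PTRS: {benchmark_ptrs:.2f}")
print(f"Regression-based PTRS: {model.predict_proba(new_asset)[0, 1]:.2f}")
```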

AI solves many of the challenges that plague existing ways of assessing the PTRS of a drug candidate. It can crunch vast amounts of data and uncover patterns that neither humans nor traditional statistical approaches can detect. If trained properly, AI is objective and unbiased and performs risk assessments consistently, reliably and in a standardized fashion.

The Intelligencia AI product suite is a powerful tool for professionals in portfolio planning, business development and licensing, program and product strategy and long-term planning. Our customers are mid-sized to large pharmaceutical companies. Additionally, venture capitalists, hedge fund managers and other investment professionals benefit from being able to better assess a drug's chances of success.

We have worked with a highly experienced, interdisciplinary team for several years to develop AI models and build the solid data foundation required to train them. We update the database daily and complement the solution with value-added features such as AI explainability. Accessing a solution built with tens of thousands of expert hours, and proven reliable and valuable in the hands of existing customers, is far easier, faster and more cost-effective than building one from scratch.

Carnegie Mellon University shared an excellent description of explainable artificial intelligence (XAI): "Explainable artificial intelligence (XAI) is a powerful tool for answering critical How? and Why? questions about AI systems and can address rising ethical and legal concerns. As a result, AI researchers have identified XAI as a necessary feature of trustworthy AI, and explainability has experienced a recent surge in attention."

Within Portfolio Optimizer, our Explainable AI feature visually shows the drivers behind each AI-driven PTRS prediction. This helps dispel the impression of a mysterious AI black box and gives you confidence in our PTRS assessments. Our AI explainability blog post covers this in more detail.
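
As a rough illustration of what per-prediction explanations look like in general (not a description of our implementation), the sketch below uses the open-source SHAP library to attribute a single prediction to its input features; the model, feature names and data are synthetic.

```python
# Hypothetical illustration of per-prediction feature attribution via SHAP;
# this is NOT Intelligencia's actual Explainable AI implementation.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["trial_size", "biomarker_selected", "orphan_designation"]  # hypothetical
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # synthetic outcome

model = GradientBoostingClassifier().fit(X, y)

# Attribute one candidate's prediction to its input features
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
```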

We curate and harmonize over 1.5B data points that capture clinical and biological information. The data we collect span drug biology, indications, regulatory designations, patents and funding, genes, proteins, targets, gene expression, biological pathways, and clinical trial design, setup, execution and outcomes, to name but a few. Data collection is supported by a comprehensive data strategy, automation and technology, and expert data curation to create a centralized data repository. We describe our data as AI-ready. If you have specific questions about our process and the type of data we capture, let's talk.
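
To make "AI-ready data" a bit more concrete, here is a purely hypothetical sketch of what a single harmonized record in such a repository could look like; the fields are illustrative, not our actual schema.

```python
# Illustrative schema sketch of a harmonized, "AI-ready" record.
# Field names are hypothetical, not Intelligencia's.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    drug_name: str
    target: str                                        # e.g., gene/protein target
    indication: str
    phase: str
    regulatory_designations: list[str] = field(default_factory=list)
    trial_design: dict = field(default_factory=dict)   # design, setup, execution metadata
    outcome: str | None = None                         # filled in once the trial reads out

record = AssetRecord(
    drug_name="example-drug",
    target="EGFR",
    indication="NSCLC",
    phase="Phase 2",
    regulatory_designations=["Orphan Drug"],
    trial_design={"arms": 2, "randomized": True},
)
print(record)
```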

A prospective assessment involves monitoring the model's predictions on trials that are still ongoing at the time of training and scoring those predictions once the trials have concluded. Prospective assessments are the safer option because they ensure that only information available at the time of prediction is used, avoiding the introduction of bias, and because the assessment metrics can be verified by an external observer.
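
A minimal sketch of the idea on synthetic data is shown below: the model is trained only on trials completed before a cutoff date, its predictions for still-ongoing trials are frozen, and the metric is computed only after those trials read out. The fields, dates and data are hypothetical, not our actual pipeline.

```python
# Illustrative prospective assessment on synthetic trial data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
trials = pd.DataFrame({
    "completion_date": pd.to_datetime("2018-01-01")
        + pd.to_timedelta(rng.integers(0, 2500, n), unit="D"),
    "feature_a": rng.random(n),
    "feature_b": rng.random(n),
})
trials["success"] = ((trials.feature_a + trials.feature_b
                      + rng.normal(0, 0.3, n)) > 1).astype(int)

cutoff = pd.Timestamp("2022-01-01")                 # hypothetical training date
past = trials[trials.completion_date < cutoff]      # information available at prediction time
future = trials[trials.completion_date >= cutoff]   # still ongoing at training time

model = LogisticRegression().fit(past[["feature_a", "feature_b"]], past.success)

# Predictions are frozen now; the metric is computed only after the trials conclude.
frozen_preds = model.predict_proba(future[["feature_a", "feature_b"]])[:, 1]
print("Prospective AUC (computed after readout):",
      round(roc_auc_score(future.success, frozen_preds), 3))
```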

The accuracy of an AI model can also be measured during the model training process; this is called a retrospective assessment. It involves setting aside a subset of data from historical trials and excluding it from model training. These unseen data are then used to assess how good the model's predictions are. This method can provide a good estimate of the model's performance but has some drawbacks, especially related to bias.
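
The sketch below illustrates the general idea on synthetic data: a slice of historical trials is held out, the model is trained on the rest, and the held-out slice is used for scoring. It is an illustration, not our actual evaluation code.

```python
# Illustrative retrospective assessment with a held-out slice of historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.random((400, 3))
y = (X @ np.array([1.0, 0.8, -0.5]) + rng.normal(0, 0.3, 400) > 0.6).astype(int)

# The held-out set is never seen during training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("Retrospective AUC on held-out historical trials:",
      round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```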

Our work relies mainly on machine learning (ML) algorithms. ML is a subset of AI focused on developing algorithms and statistical models that enable systems to learn from data and improve their performance on specific tasks over time without being explicitly programmed. ML is particularly well suited to crunching large amounts of data and finding hidden patterns.
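
As a simple, generic illustration of learning patterns from data without hand-written rules (not our modeling approach), the sketch below lets a clustering algorithm discover two hidden groups in unlabeled synthetic data.

```python
# Minimal illustration of "learning from data": clustering uncovers structure
# in unlabeled data without explicit rules. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Two hidden groups, unknown to the algorithm in advance
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print("Discovered group sizes:", np.bincount(clusters))
```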