List of questions

LabGenius
Workshop language: English
  • 2384

    What is the best way to estimate neural network model uncertainties?

    At LabGenius we apply empirical computation methods to accelerate evolution, with the mission of finding radically new proteins for therapeutics. The objective of the Modelling team is first to learn the links from DNA sequences to protein function, and then to use these learnings to suggest improved libraries. We have shown that our models work well on test data similar to the training data, but we are unsure of their reliability when deviating from it. Our stochastic search optimiser explores the DNA space for peaks in the protein fitness landscape, using the model as a heuristic mapping DNA to function scores. A good understanding of the model uncertainties will enable us to both:
      • exploit learnings from our training data
      • identify regions for optimal exploration (generation of new data in the lab) to find promising new regions with high protein fitness
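One common way to estimate neural network uncertainty is a deep ensemble: train several models and use their disagreement on a new input as an uncertainty signal. Below is a minimal, hypothetical sketch of the idea; the data is synthetic, and each "network" is a cheap random-feature ridge regressor standing in for a real model (all names and numbers are illustrative, not LabGenius's actual pipeline):

```python
# Hypothetical sketch: predictive uncertainty from a deep ensemble.
# Members disagree most where training data is sparse, which flags
# candidates for exploration (new lab data) vs exploitation.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a sequence-to-fitness dataset (features -> fitness score).
X_train = rng.normal(size=(200, 8))
y_train = X_train[:, 0] ** 2 + 0.1 * rng.normal(size=200)

def fit_member(X, y, seed):
    """Train one ensemble member on a bootstrap resample.
    (Random-feature ridge regression as a minimal neural-net stand-in.)"""
    rs = np.random.default_rng(seed)
    idx = rs.integers(0, len(X), len(X))           # bootstrap resample
    W = rs.normal(size=(X.shape[1], 64))           # random hidden projection
    H = np.tanh(X[idx] @ W)
    beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(64), H.T @ y[idx])
    return W, beta

ensemble = [fit_member(X_train, y_train, seed) for seed in range(5)]

def predict_with_uncertainty(X):
    """Mean prediction plus member disagreement as an uncertainty estimate."""
    preds = np.stack([np.tanh(X @ W) @ beta for W, beta in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

X_new = rng.normal(size=(10, 8))
mean, std = predict_with_uncertainty(X_new)
# High std: candidates worth generating in the lab (exploration).
# High mean with low std: candidates to exploit.
```

Alternatives in the same spirit include Monte Carlo dropout (keep dropout active at prediction time and average several stochastic forward passes) and Gaussian processes, which give calibrated uncertainties directly but scale less well to large sequence libraries.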

Puratos
Workshop language: English
  • 2204

    What are the advantages of using AI models over regression models to optimize and model highly nonlinear and complex biological processes in fermentation?

    As a Biotech unit, we manage a wide diversity of data, from biological and metabolic responses to yield after downstream processing. These diverse biological data depend on each other within a living model.
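The core advantage of flexible (AI) models over linear regression is that they can fit peaked, interacting responses without a hand-chosen functional form. A minimal sketch of the contrast, on hypothetical fermentation-like data (a yield that peaks at an optimal temperature; all variable names and numbers are made up for illustration):

```python
# Illustrative sketch: a linear fit vs a simple nonlinear model on a
# peaked, nonlinear response (synthetic stand-in for fermentation yield).
import numpy as np

rng = np.random.default_rng(1)
temp = rng.uniform(20, 40, 300)                  # process temperature (C)
yield_ = np.exp(-((temp - 30) ** 2) / 20) + 0.05 * rng.normal(size=300)

# Linear regression: yield ~ a * temp + b (least squares).
A = np.column_stack([temp, np.ones_like(temp)])
coef, *_ = np.linalg.lstsq(A, yield_, rcond=None)
linear_err = np.mean((A @ coef - yield_) ** 2)

# Simple nonlinear model: per-bin mean yield (a crude flexible learner,
# standing in for a neural network or tree ensemble).
bins = np.linspace(20, 40, 21)
which = np.digitize(temp, bins)
bin_means = np.array([yield_[which == b].mean() if (which == b).any() else 0.0
                      for b in range(22)])
nonlinear_err = np.mean((bin_means[which] - yield_) ** 2)

# The line cannot represent the peak at 30 C, so its error stays large;
# the flexible model's error approaches the noise floor.
```

The same argument extends to interactions between variables (pH x temperature x feed rate), which tree ensembles and neural networks capture automatically but which linear regression needs explicitly engineered cross terms to see.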

  • 2205

    How, in an R&D biotechnology department (enzyme production), could using AI and data models reduce the time needed for experimental design and the time to market?

    Adoption of data and AI technologies needs to be translated into business impact. Time-to-market reduction is one such impact.

  • 2206

    Could the use of genetic algorithms and AI speed up the discovery process for new enzymes? How?

    Time to market is a constant pressure for R&D teams in a company. Could the use of AI really decrease the number of experimental designs needed?
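For intuition, a genetic algorithm evolves a population of candidate sequences by selection, crossover, and mutation, so that each lab round (or model evaluation) is spent near the best candidates found so far rather than on a full factorial design. A toy sketch, with a hypothetical fitness function standing in for an assay or a trained sequence-to-function model (the target sequence and all parameters are invented for illustration):

```python
# Minimal genetic algorithm over short amino-acid sequences.
# The fitness function here is a toy; in practice it would be an assay
# result or a learned model's predicted enzyme activity.
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
TARGET = "MKTAYIAK"                  # hidden toy optimum, illustration only

def fitness(seq):
    """Toy score: positions matching the hidden target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):
    """Randomly substitute each position with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def crossover(a, b):
    """Single-point crossover of two parent sequences."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(0)
pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
       for _ in range(50)]

for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                  # selection (elitism)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(40)]                # variation

best = max(pop, key=fitness)
```

The practical speed-up comes from replacing `fitness` with a cheap trained model, so thousands of candidates are screened in silico and only the top-ranked handful go to the bench each round.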

  • 2207

    Starting from scratch in a private R&D biotech centre covering genetics, upstream, and downstream processes, how would you start implementing a data governance and predictive modelling strategy?

    Setting up a data process flow, analytics, and predictive models in a company is needed, but how do you start, and where do they add the most value in the process?