In the ever-evolving field of life sciences, the emergence of Large Language Models (LLMs) has brought about a revolutionary shift in how data can be processed, analyzed, and harnessed. LLMs, powered by state-of-the-art natural language processing (NLP) techniques, are able to generate and manipulate vast amounts of scientific text and data. By leveraging their immense language understanding and generation abilities, LLMs have the potential to accelerate discoveries, aid in drug development, facilitate knowledge extraction from scientific literature, and revolutionize the way scientists approach complex biological and medical problems.
This blog post sets the stage for exploring the challenges and the impact of LLMs in the life sciences. For a more comprehensive explanation of how these advancements in data, machine learning, and AI need to be handled, you can check our previous blog post, Introduction to Reliability & Collaboration for Data & ML Lifecycles, and embark on the Next Generation Data & AI Journey powered by MLOps.
Transcending Boundaries: How a Biotech Transformed Drug Discovery with LLMs
Many companies nowadays want to integrate Large Language Models into their solutions, but only a small fraction of them has a coherent strategy for this challenge. Let’s look at one of the success stories in the life sciences. Genentech, a pharmaceutical company that is part of the Roche Group, set out on a groundbreaking endeavor to revolutionize drug discovery using Large Language Models (LLMs). Despite the many challenges of adopting an innovative new technology, the company turned to LLMs as a potential solution. This advance in AI accelerates experimentation, replacing the traditionally linear, sequential drug development process with a faster, more iterative one.
Genentech and Roche, in collaboration with Recursion Pharmaceuticals, aim to maximize the potential of a drug discovery platform coupled with robust capabilities in single-cell data generation and machine learning. By combining these resources, they intend to create an extensive, all-encompassing approach to identifying new drug targets. This collaboration holds the promise of accelerating the development of small-molecule medicines and lays the groundwork for a broader commitment to digitizing drug discovery through advanced AI technologies.
Unraveling Generative AI, LLMs and their Distinction from AGI
Before we elaborate on the challenges and the advancements of these large language models in life science and healthcare, we should first decode and clarify the enigmatic terms of Generative AI, Foundation Model, Large Language Models (LLMs), and Artificial General Intelligence (AGI).
- Generative AI: a subset of AI that focuses on generating new content, such as text, images, audio, or video, that does not explicitly replicate existing examples but is similar to, or indistinguishable from, human-generated content. Generative AI models learn patterns from the training data and generate new outputs that possess similar characteristics.
- Foundation Model: a model trained on vast amounts of data that serves as a fundamental building block for various tasks. The term describes a large, pre-trained model, typically a large language model, with a broad understanding of language across many domains, which serves as the basis for developing domain-specific models. For example, OpenAI's GPT-3 and GPT-4 can be considered foundation models: they are trained on a diverse range of internet text, can generate human-like text responses across many topics, and can be fine-tuned for specific applications such as chatbots, language translation, content generation, and more.
- Large Language Models (LLMs): a type of generative AI model specifically designed for generating human-like text, though their capabilities also extend beyond that to language translation, sentiment analysis, named entity recognition, and more. They are trained on large datasets of text and learn the statistical patterns and structures of language. LLMs can then generate coherent and contextually appropriate text based on the input provided to them. However, it is important to note that LLMs are based on statistical patterns and do not possess true understanding or consciousness. They lack common sense and may sometimes generate incorrect or nonsensical responses. Careful monitoring, filtering, and human oversight are necessary when using LLMs to ensure the quality and reliability of the generated content.
- Artificial General Intelligence (AGI): represents the pursuit of creating an AI system that can understand, learn, and apply knowledge across diverse domains, perform tasks with human-level competence, adapt to new situations, and exhibit reasoning, problem-solving, creativity, and social intelligence. AGI aims to surpass the limitations of narrow AI, which is designed for specific tasks or domains and lacks the ability to generalize knowledge or transfer skills to new situations.
The goal of AGI is to achieve a level of versatility and cognitive capability that surpasses specialized tasks, enabling it to work on a wide range of intellectual functions. While Generative AI focuses on creating specific models capable of solving specific problems, AGI aims for a more comprehensive and generalized form of intelligence that can operate across multiple domains. In other words, AGI represents the ultimate goal of achieving human-like intelligence in machines, whereas Generative AI contributes to the advancement of AI models that exhibit creative performance within narrower contexts.
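To make the phrase "learn the statistical patterns of language" concrete, here is a deliberately tiny, toy bigram model in pure Python. It is a sketch of the underlying idea only; real LLMs use neural networks over billions of parameters, not word-count tables:

```python
from collections import defaultdict

def train_bigram_model(corpus):
    """Count word-to-next-word transitions observed in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts

def generate(model, start, max_words=10):
    """Greedily emit the most frequently observed successor of each word."""
    words = [start]
    for _ in range(max_words - 1):
        successors = model.get(words[-1])
        if not successors:
            break
        words.append(max(successors, key=successors.get))
    return " ".join(words)

corpus = [
    "the model generates text",
    "the model learns patterns",
    "the model learns patterns from data",
]
model = train_bigram_model(corpus)
print(generate(model, "the"))  # the model learns patterns from data
```

Even this minimal example shows the essential property mentioned above: the output is fluent continuation of statistical regularities in the training data, with no understanding behind it.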
The Challenges of LLMs
Developing and delivering a large language model (LLM) in a production environment is a task riddled with unique challenges and complexities. Let’s delve into the various obstacles faced when building and delivering an LLM in production, highlighting the intricate nature of this endeavor:
- Technical expertise: Developing and implementing LLMs requires a high level of technical expertise in natural language processing (NLP), machine learning, and deep learning. Acquiring or developing this expertise within an organization may pose a challenge if the development teams don't already possess some of the basic skills. The lack of knowledge and expertise can be mitigated by adapting existing models instead of developing them from scratch.
- Data availability and quality: Training LLMs requires large amounts of high-quality data. Obtaining the necessary data for training the models and ensuring its quality, relevance, and accuracy can be a significant challenge. Organizations that want to leverage these models may need to explore partnerships, data acquisition strategies, or invest in data collection efforts.
- Ethical and legal considerations: LLMs can generate content that may be subject to legal and ethical concerns. Ensuring that the generated content adheres to legal and ethical standards, such as avoiding biased or discriminatory outputs, can be a complex challenge. Companies usually have to develop robust content filtering and moderation mechanisms to address these concerns.
- Cost for training, fine-tuning, and evaluating the model: Investing in the required hardware, cloud computing services, or partnering with specialized providers might be necessary and also challenging.
- Continuous delivery and monitoring: This involves ongoing research and development efforts to continuously improve the models, address limitations, incorporate user feedback, monitor performance, detect bias, model drift, and data drift, and more.
- User acceptance: Introducing LLM-based features may face resistance or skepticism from users who are concerned about privacy, data security, or the reliability of the generated content.
- Integration and scalability: Integrating LLMs into existing modules or systems may require significant technical integration work. Also, scaling LLMs up to handle increased demand or diverse use cases may require careful planning and additional resource allocation.
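The content filtering and moderation mechanisms mentioned under the ethical considerations above can start very simply. The sketch below is illustrative only: the blocklist terms and the email regex are made-up examples, and a production system would combine curated, domain-specific policies with ML-based classifiers:

```python
import re

# Illustrative blocklist; real policies would be curated per domain.
BLOCKED_TERMS = {"guaranteed cure", "medical advice"}

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def moderate(text):
    """Return (allowed, redacted_text) for a generated LLM output."""
    lowered = text.lower()
    allowed = not any(term in lowered for term in BLOCKED_TERMS)
    # Redact email addresses regardless of the allow/block decision.
    redacted = EMAIL_PATTERN.sub("[REDACTED]", text)
    return allowed, redacted

ok, safe = moderate("Contact alice@example.com for a guaranteed cure.")
print(ok, safe)  # False Contact [REDACTED] for a guaranteed cure.
```

Even a simple post-processing gate like this makes the moderation requirement testable and auditable before more sophisticated classifiers are added.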
Addressing these challenges will require a strategic approach, collaboration between technical and business team members, and potentially seeking external expertise or partnerships. In addition, we, at MLAB, believe that for a digital transformation to thrive, it must be accompanied by a simultaneous transformation of the company's culture and operating model. It is crucial to foster a continuous learning environment across the entire team, establish a comprehensive MLOps strategy that is embraced by all key stakeholders, and embrace a culture that allows room for mistakes and encourages ongoing innovation. The operating model should encompass intelligent, automated decision-making processes and an optimized approach to DevOps and SRE (Site Reliability Engineering). We have delved into these concepts in more detail in a separate post.
Unleashing the Power of MLOps and SRE for Next-Level LLMs
Before we explain how MLOps and SRE enhance the development and deployment of large language models, you can read our previous blog posts about the effective MLOps framework and the effective SRE approach if you want more details.
SRE principles enable the development of resilient and scalable LLM systems, ensuring high availability and reliability. MLOps practices streamline the deployment and management of machine learning models, enhancing the accuracy and efficiency of the whole software architecture. Together, SRE and MLOps empower organizations to overcome challenges and achieve greater operational excellence in building, evaluating, and delivering LLMs, ultimately fostering a more secure and compliant environment.
Let’s see how some of the aforementioned challenges could be handled using SRE and MLOps best practices:
Gain control over cost for training, evaluating, and delivering LLMs: By implementing automated processes for resource allocation and monitoring, SRE helps optimize the infrastructure required for training and evaluating LLM models. This ensures efficient resource utilization, reducing costs associated with unnecessary hardware or cloud resource provisioning. MLOps practices enable organizations to streamline the end-to-end machine learning lifecycle, including model development, evaluation, and deployment. MLOps frameworks provide automation and standardization, reducing manual efforts and the associated costs. These frameworks also enable efficient versioning and reproducibility of models, minimizing duplication of work and reducing development costs.
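A first step toward cost control is simply making the cost drivers explicit. The sketch below estimates per-token serving cost; the request volumes and price figures are illustrative assumptions, not real vendor rates:

```python
def estimate_inference_cost(num_requests, avg_input_tokens, avg_output_tokens,
                            price_per_1k_input, price_per_1k_output):
    """Rough monthly cost estimate for serving an LLM via a per-token API.

    All prices are illustrative parameters, not quotes from any provider.
    """
    input_cost = num_requests * avg_input_tokens / 1000 * price_per_1k_input
    output_cost = num_requests * avg_output_tokens / 1000 * price_per_1k_output
    return input_cost + output_cost

# Example: 100k requests/month, assumed rates of $0.01 / $0.03 per 1k tokens.
cost = estimate_inference_cost(100_000, 500, 200, 0.01, 0.03)
print(f"${cost:,.2f}")
```

Wiring such an estimate into automated resource monitoring is what turns "reduce costs" from an aspiration into a tracked metric.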
Apply continuous delivery and monitoring: By combining SRE and MLOps, organizations can establish robust monitoring processes for LLM solutions. These procedures provide real-time insights into the performance of models, helping detect any issues that may arise. Additionally, they enable organizations to continuously gather data, supporting ongoing analysis and risk mitigation efforts.
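One common drift check behind such monitoring is the Population Stability Index (PSI), comparing a live feature distribution against the training baseline. This is a minimal pure-Python sketch; the 0.2 alert threshold is a widely used rule of thumb, not a universal law:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of a numeric feature."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Small epsilon keeps log() finite for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]       # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # concentrated on [0.5, 1)
print(population_stability_index(baseline, baseline) < 0.1)  # True: no drift
print(population_stability_index(baseline, shifted) > 0.2)   # True: drift
```

Running a check like this on every batch of incoming data gives the "real-time insights" described above a concrete, alertable signal.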
Ensure data availability and quality: SRE and MLOps teams can work closely with data engineering teams to ensure that data pipelines are designed to handle high volumes of data, are scalable, and have proper redundancy mechanisms in place. This ensures continuous data availability for training, evaluating, and deploying large language models. In addition, MLOps frameworks provide capabilities for data preprocessing, cleansing, and transformation, allowing organizations to perform necessary data quality checks and ensure consistency and accuracy.
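The data quality checks mentioned above can be expressed as small, composable validation rules in the pipeline. The sketch below uses a hypothetical patient-record schema purely for illustration:

```python
def check_record(record, required_fields, allowed_ranges):
    """Return a list of data-quality issues found in a single record."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    for field, (lo, hi) in allowed_ranges.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            issues.append(f"out of range: {field}={value}")
    return issues

def validate_batch(records, required_fields, allowed_ranges):
    """Split a batch into clean records and (record, issues) rejects."""
    clean, rejected = [], []
    for record in records:
        issues = check_record(record, required_fields, allowed_ranges)
        if issues:
            rejected.append((record, issues))
        else:
            clean.append(record)
    return clean, rejected

records = [
    {"patient_id": "p1", "age": 54},
    {"patient_id": "", "age": 230},
]
clean, rejected = validate_batch(records, ["patient_id", "age"], {"age": (0, 120)})
print(len(clean), len(rejected))  # 1 1
```

Rejected records can then be routed to a quarantine table for review instead of silently polluting the training data.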
Respect data regulations and discoverability within an organization: SRE and MLOps teams can collaborate to establish a centralized data catalog or a data management strategy that organizes the available datasets, their sources, and their respective owners. This helps in maintaining a clear understanding of the data lineage, ownership, and compliance requirements.
How can you leverage Generative AI and LLMs in Life Sciences?
Generative AI and LLMs offer valuable opportunities for advancing the life sciences. It is important to clarify at this point that the goal is not to replace researchers, but to enhance their capabilities, accelerate processes, and minimize potential errors. Here are several ways in which a company can utilize these technologies:
- Drug Discovery: LLMs can generate novel molecular structures based on existing compounds and known drug properties. These generated molecules can be screened for their potential therapeutic properties, allowing researchers to explore a wider chemical space and identify potential candidates for further testing.
- Biomarker identification: LLMs can analyze large volumes of biomedical literature and genomic data to identify potential biomarkers for diseases. By generating insights and patterns from the available data, LLMs can assist in identifying molecular markers that are associated with specific diseases or conditions.
- Clinical trial optimization: By analyzing patient data, medical records, and trial protocols, LLMs can generate personalized trial inclusion and exclusion criteria, improving patient selection. Additionally, LLMs can generate insights for identifying suitable trial sites, predicting patient recruitment rates, and optimizing trial protocols.
- Medical image analysis: LLMs can generate synthetic images to augment training datasets for training image recognition models. LLMs can also aid in image segmentation tasks, where they can generate detailed annotations for medical images, helping to automate and accelerate image analysis processes.
- Disease prediction and prognosis: LLMs can analyze electronic health records, patient data, and medical literature to identify patterns and risk factors associated with specific diseases. By recognizing correlations, LLMs can assist in early disease detection, allowing healthcare professionals to intervene proactively.
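As a sketch of the literature-mining use cases above, here is the shape of an LLM-assisted biomarker extraction step. Everything here is hypothetical: the prompt wording is illustrative and unvalidated, and the stub callable stands in for a real model API, which would be injected in practice:

```python
def build_biomarker_prompt(abstract, disease):
    """Compose a constrained extraction prompt (illustrative wording only)."""
    return (
        "From the abstract below, list any gene or protein biomarkers "
        f"associated with {disease}, one per line, or reply NONE.\n\n"
        + abstract
    )

def extract_biomarkers(abstract, disease, llm):
    """`llm` is any callable mapping a prompt string to a text response."""
    response = llm(build_biomarker_prompt(abstract, disease))
    lines = [line.strip() for line in response.splitlines()]
    return [line for line in lines if line and line != "NONE"]

# A stubbed 'model' stands in for a real API client in this sketch.
fake_llm = lambda prompt: "HER2\nBRCA1"
abstract = "HER2 amplification and BRCA1 mutations were observed in the cohort."
print(extract_biomarkers(abstract, "breast cancer", fake_llm))  # ['HER2', 'BRCA1']
```

Keeping the model behind a plain callable also makes the pipeline testable without network access, which matters for the monitoring and human-oversight requirements discussed earlier.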
Machine Learning Architects Basel
Overcoming the challenges of implementing reliable and secure data- and ML-driven solutions in an organization requires a strategic approach that leverages SRE and MLOps. By integrating these disciplines efficiently, organizations can effectively manage legal and liability risks while harnessing the power of advanced technologies.
Machine Learning Architects Basel (MLAB) is a member of the Swiss Digital Network (SDN). Having pioneered the Digital Highway for End-to-End Machine Learning & Effective MLOps we have created frameworks and reference models that combine our expertise in DataOps, Machine Learning, MLOps, and our extensive knowledge and experience in DevOps, SRE, and agile transformations.
If you want to learn more about how MLAB can aid your organization in creating long-lasting benefits by developing and maintaining reliable data and machine learning solutions, don't hesitate to contact us.
We hope you find this blog post informative and engaging. It is part of our Next Generation Data & AI Journey powered by MLOps.
References and Acknowledgements
- The Digital Highway for End-to-End Machine Learning & Effective MLOps
- Introduction to Reliability & Collaboration for Data & ML Lifecycles
- Building LLM applications for production, by Chip Huyen
- The utility of ChatGPT as an example of large language models in healthcare education, research and practice: Systematic review on the future perspectives and potential limitations, by Sallam, Malik