
Can Perplexity Reliably Answer Technical Questions in the Life Sciences?

The generative text AI tool Perplexity has rapidly gained popularity in the life sciences for its ability to show its references when answering. As those references can include scholarly articles, it has great potential as a literature review assistant for scientists. AIs are well-known for both hallucinations and inaccuracy, however, and being incorrect even a small percentage of the time is normally not something a professional scientist can allow.

We therefore decided to set up a little experiment: we asked three subject matter experts in the life sciences to ask Perplexity a series of questions of increasing difficulty, then evaluate the answers:

  1. A question which a grad student should be able to answer relatively easily
  2. A question which most PIs in the field would be able to answer
  3. A question requiring very specific knowledge to answer: something a scientist wouldn’t know unless it fell within their own research area.

Here’s what they asked, what Perplexity had to say, verbatim, and the scientists’ take on Perplexity’s responses.

Some quick notes on this experiment: the scientists were not asked to proofread or correct Perplexity’s responses, but rather just to provide overall feedback, which is found below the questions. They all used the free version.

Steve Perrault, PhD, former Associate Director of Research & Development at Foundation Medicine

Q1: What cell type is targeted by mRNA vaccines?

mRNA vaccines primarily target and deliver their mRNA cargo to antigen-presenting cells (APCs), particularly dendritic cells and macrophages.

Upon administration, the mRNA encapsulated in lipid nanoparticles is taken up by APCs. Once inside the APCs, the mRNA is released into the cytosol and translated into the encoded antigenic protein.

This antigenic protein is then processed and presented on the surface of the APCs in the context of major histocompatibility complex (MHC) molecules, allowing for the activation of antigen-specific T cells and initiation of an adaptive immune response.

The ability of mRNA vaccines to efficiently target and activate APCs in vivo has been demonstrated in studies using non-human primates like rhesus macaques.

These studies have shown that modified mRNA vaccines can effectively deliver their mRNA cargo to APCs, leading to robust antigen expression and subsequent activation of antigen-specific T cells.

Furthermore, the size of the lipid nanoparticles used for mRNA delivery plays a crucial role in their trafficking and uptake by APCs. Studies have shown that nanoparticles within a specific size range (around 20-100 nm) can efficiently drain into lymph nodes and penetrate the paracortex region, where APCs reside, facilitating their uptake and antigen presentation.

In summary, mRNA vaccines primarily target and deliver their mRNA cargo to antigen-presenting cells, particularly dendritic cells and macrophages, enabling the expression of the encoded antigenic protein and subsequent activation of antigen-specific T cells, which is a key mechanism underlying their immunogenicity.

Q2: How do mRNA vaccines escape endosomes?

mRNA vaccines are delivered into cells encapsulated in lipid nanoparticles (LNPs). For the mRNA to be effective, it must escape from the endosomes into the cytoplasm where it can be translated into proteins. The endosomal escape process is facilitated by the ionizable lipids in the LNP formulation:

  • Upon endocytosis, the LNP becomes trapped in an endosome, which is an acidic compartment within the cell
  • The acidic environment of the endosome causes the ionizable lipids in the LNP to become positively charged
  • This positive charge allows the LNP to interact with and disrupt the negatively charged endosomal membrane through electrostatic interactions
  • The shape change and disruption of the endosomal membrane enables the release of the mRNA cargo from the endosome into the cytoplasm
  • Optimizing the pKa values and lipid tail structures of the ionizable lipids can enhance their ability to become protonated and facilitate endosomal escape
  • Early endocytic compartments like recycling endosomes have been identified as key sites for productive mRNA escape, whereas prolonged retention in late endosomes leads to mRNA degradation
  • So in summary, the ionizable lipids undergo a charge-induced shape change in the acidic endosome that allows the LNP to disrupt the endosomal membrane and release its mRNA cargo into the cytosol for translation


Q3: What’s the structure of the ionizable lipid used for endosome escape with mRNA vaccines?

The ionizable lipids used in mRNA vaccines like the Pfizer and Moderna COVID-19 vaccines have a unique structure that facilitates endosomal escape and delivery of mRNA into cells.

Key Structural Features:

Ionizable Head Group: The lipids contain a protonatable or ionizable amino head group, typically a tertiary amine. This allows the lipid to be neutral at physiological pH but become positively charged in the acidic environment of endosomes.

Branched Lipid Tails: The lipids have branched or multi-tail lipid chains, rather than linear tails. This branched structure is believed to promote a cone-shaped molecular geometry that aids in endosomal membrane destabilization and escape.

Pfizer’s ALC-0315 has four short branched tails.

Moderna’s SM-102 has three branched tails, one of which is longer.

Ester Linkers: The lipid tails are often connected to the head group via ester linkers, which can influence the biodegradability and potency of the lipid.

The combination of the ionizable head group, branched tails, and ester linkers allows these lipids to effectively complex with mRNA, facilitate endosomal escape through the proposed “proton sponge” effect and membrane destabilization, and enable cytosolic delivery of the mRNA cargo.

Bonus historical science question: How did DNA nanotechnology start?

DNA nanotechnology originated from a landmark theoretical paper published in 1982 by Ned Seeman, which proposed the idea of using DNA molecules to create nanostructures.

This seminal work laid the foundation for the field and sparked the development of various techniques and applications in DNA nanotechnology.

The key idea proposed by Seeman was to exploit the predictable base-pairing interactions of DNA to construct artificial nanostructures with precise control over their shape and size.

He envisioned using synthetic DNA strands designed to self-assemble into desired geometries, taking advantage of DNA’s inherent programmability and molecular recognition capabilities.

The first experimental demonstration of this concept came in the early 1990s when Seeman and his colleagues successfully constructed DNA-based nanostructures, such as a cube and a truncated octahedron.

These initial achievements paved the way for the rapid growth of DNA nanotechnology, leading to the development of various techniques like DNA origami, DNA tiles, and DNA nanorobots.

Over the years, DNA nanotechnology has evolved into an interdisciplinary field, bringing together researchers from diverse areas like chemistry, physics, materials science, and computer science to explore the potential of DNA-based nanostructures for various applications, including nanoelectronics, biosensing, drug delivery, and molecular computing.

Steve’s take on Perplexity:

Steve said Perplexity was “pretty solid” but criticized it for a “lack of context, like if you’re in that field and wanting to understand a bigger picture based on a question, that isn’t there.”

But on a super reductionist view it seems pretty good. Like what someone might give as an exam answer after studying, but not what someone who really knows a field should hopefully say to provide that bigger picture.

I don’t see anything “wrong” but someone who really knows their stuff would give a broader answer. This comes across as a pre-med major who’s really good at studying, and the tech details are accurate.

For the second two questions, if someone asked me this I would give the context of this solving a problem that had been around since the 1960s, was solved by someone at UBC, how it was solved, why it matters. Things that provide way more depth.

Regarding the bonus historical question on the origins of DNA nanotechnology which he decided to ask, he said: “This is accurate but misses the nuances that one can only know from really diving into it. I wouldn’t expect AI to come up with that. Probably not possible. But the technical details are right.”

Hussain Dahodwala, PhD, Associate Professor at the IBBR Institute for Bioscience and Biotechnology Research and former Director, Upstream Process Development at the National Institute for Innovation in Manufacturing Biopharmaceuticals (NIIMBL)

Q1: What are the challenges in enzyme engineering and what advances are anticipated in enzyme engineering for biobetters?

The key challenges and advances in enzyme engineering for biobetters are:

Challenges in Enzyme Engineering:
  • Enzymes often have poor stability under industrial conditions, such as high temperatures, pressures, and organic solvents. This leads to poor product yields.
  • Enzymes may have narrow substrate specificity, limiting their ability to work with non-natural substrates.
  • Enzymes can be difficult to produce at scale in industrial fermentation processes.

Advances in Enzyme Engineering:
  • Enzyme engineering techniques like directed evolution, rational design, and semi-rational design are being used to improve enzyme properties such as activity, specificity, stability, and stereoselectivity.
  • Computational methods and machine learning are helping to expand the enzyme design space and accelerate the discovery of novel enzymes with desired properties.
  • Strategies like enzyme immobilization, molecular modifications, and the use of nanoparticle or cell-based delivery systems are being explored to enhance the stability and performance of enzyme therapeutics.
  • Advances in biotechnology and recombinant expression are enabling the industrial-scale production of engineered enzymes.

Overall, enzyme engineering is a rapidly advancing field that is critical for developing efficient, stable, and versatile biocatalysts to enable sustainable and cost-competitive bio-based processes.

Q2: What is the inspiration for Biomanufacturing 4.0 and how is it different from current biomanufacturing?

Biomanufacturing 4.0, also known as Pharma 4.0, is inspired by the broader Industry 4.0 concept and is focused on the digital transformation of biomanufacturing processes. The key differences between Biomanufacturing 4.0 and current biomanufacturing practices are:

  1. Increased use of advanced digital technologies: Biomanufacturing 4.0 leverages technologies like artificial intelligence, machine learning, digital twins, and advanced process control to enable greater process visibility, optimization, and predictive capabilities.
  2. Emphasis on data-driven decision making: Rather than relying solely on empirical knowledge, Biomanufacturing 4.0 aims to incorporate data analytics and modeling to make more informed, data-driven decisions throughout the product lifecycle.
  3. Shift towards continuous improvement: Biomanufacturing 4.0 promotes a culture of continuous improvement, enabled by the increased process understanding and agile regulatory frameworks that allow for more efficient implementation of process changes.
  4. Enhanced supply chain robustness: The advanced process control and predictive capabilities of Biomanufacturing 4.0 can lead to improved supply chain reliability and the ability to better manage variability, ultimately enhancing product quality and availability.
  5. Accelerated product development: The digital tools and data-driven approach of Biomanufacturing 4.0 can help compress product development timelines, allowing for faster access to life-changing therapies.

In summary, Biomanufacturing 4.0 is inspired by the broader Industry 4.0 vision and aims to transform biomanufacturing through increased digitalization, data-driven decision making, and a focus on continuous improvement and supply chain robustness.

Q3a: What are non-canonical redox co-factors and how do we use them in cell culture?

Non-canonical redox cofactors (NRCs) are chemical compounds that can serve as alternatives to the natural cofactors like NAD(P)+ and NAD(P)H in biological redox reactions. Some key points about NRCs:
NRCs can overcome the limitations of natural cofactors, enabling specific delivery of reducing power outside of biological constraints and providing alternative reduction potentials to overcome thermodynamic barriers.

Examples of NRCs include 3-carbamoyl-1-phenethylpyridin-1-ium chloride (P2NA+) and 3-acetylpyridine adenine dinucleotide, which have been used in purified enzyme-based redox catalysis to increase economic viability and drive thermodynamically unfavorable reactions, respectively.

Researchers have developed methods to engineer microbes like E. coli to biosynthesize NRCs like nicotinamide mononucleotide (NMN+) to enable their full integration with renewable biomanufacturing processes. This involves developing growth-based screening platforms and exploring pathway combinations to achieve high intracellular NMN+ levels.

Using NRCs in cell culture can provide more economical and scalable biotransformation, as well as enable exquisite control and expand the range of chemistries accessible compared to natural cofactors. However, the impact of NRC accumulation on cell fitness must also be considered.

In summary, NRCs are emerging as important tools in cell-free biosynthesis and metabolic engineering to overcome the limitations of natural cofactors and enable new capabilities in biocatalysis and biomanufacturing.

Perplexity did not attempt to answer the second part of the question, and Hussain made another attempt to get it to answer.

Q3b: How to use non canonical redox co-factors in cell culture?

Based on the search results, here is how non-canonical redox co-factors can be used in cell culture: Metabolites and post-translational modifications (PTMs) can regulate cell signaling and gene expression through non-canonical mechanisms. Metabolites can serve as co-factors for enzymes and modify proteins through PTMs like palmitoylation, glutathionylation, and acetylation. These PTMs can alter protein interactions, stability, and function, thereby aligning cellular processes with changes in metabolic state. For example, glutathionylation of the transcription factor NF-κB decreases its ability to bind DNA, linking redox state to gene expression. Histone modifications like lactylation and crotonylation can also regulate gene transcription in response to metabolic cues. Additionally, palmitoylation can increase protein stability by blocking ubiquitination and proteasomal degradation. Several studies have demonstrated the use of non-canonical redox co-factors to modulate cell signaling and death pathways in cell culture:

  • Dihydroartemisinin, a semi-synthetic derivative of artemisinin, induced ferroptosis in pancreatic cancer cells by altering redox homeostasis and lipid peroxidation.
  • The natural product 11-methoxytabersonine triggered necroptosis in glioblastoma cells through ROS generation, calcium signaling, and mitochondrial dysfunction.
  • Xanthohumol, a chalcone from hops, activated the Nrf2 antioxidant pathway while inhibiting NF-κB in pancreatic cancer cells, leading to reduced inflammation and oxidative stress.
  • Sulforaphane, an isothiocyanate from cruciferous vegetables, upregulated Nrf2 and antioxidant enzymes while downregulating NF-κB in breast cancer and macrophage cell lines.

In summary, non-canonical redox co-factors like metabolites and their PTMs can be leveraged in cell culture to modulate signaling pathways, gene expression, and cell death mechanisms in a targeted manner by exploiting their unique biochemical properties.

Hussain’s take:

These are questions that I regularly encounter at scientific conferences and use to test graduate students’ knowledge. I can say that these questions can be answered by biotechnologists with a few years of experience.

The questions chosen were two parts each. One part would require “book learning” and the second part would require domain knowledge, some experience, and “intelligence” in order to answer.

In all three examples, Perplexity AI did an acceptable job of text mining the first half of the question. The definitions and concepts asked about were answered more accurately and clearly, with citations, than one would expect a human to remember.

The complexity of the jargon increased with each question, and in each case Perplexity AI was able to text mine the definitions. For reference, in my experience many grad students fumble the third question.

However, in every question, Perplexity AI ignored the second half of the question beyond the definitions. In other words, it does great at “what is” and fails at “how to”.

Splitting the question into separate parts was attempted with the third question, and the AI still failed to give an answer.

Chris Bjornsson, PhD, Embedded 3D & Automated Microscopy Specialist at ZEISS.

Q1: How many tissue types are there in the human body?

There are four main types of tissues in the human body: epithelial, connective, muscle, and nervous tissues.

Epithelial Tissues
Epithelial tissues are sheets of cells that cover exterior surfaces of the body, line internal cavities and passageways, and form glands. Their main functions include protection, secretion, absorption, and excretion.

Connective Tissues
Connective tissues bind together and support other tissues. They include bone, cartilage, blood, adipose (fat), and fibrous connective tissues like tendons and ligaments. Their main functions are to bind, support, protect, insulate, store energy, and produce blood cells.

Muscle Tissues
Muscle tissues are composed of cells called muscle fibers that have the ability to contract or shorten, allowing for movement. There are three types: skeletal (voluntary), smooth (involuntary), and cardiac (heart muscle).

Nervous Tissues
Nervous tissues are composed of neurons and neuroglial cells that propagate electrochemical signals called nerve impulses, enabling communication within the body and the processing of responses. The different tissue types work together to perform the various functions required for the human body to operate and maintain homeostasis.

Q2: How many orders of light does it take to form a microscope image?

To form a microscope image, it takes at least two orders of light according to the Abbe theory of image formation. The key points are: 

  • According to Ernst Abbe’s theory, for a specimen detail to be resolved in the microscope image, the objective lens must capture at least two orders of diffracted light from the specimen, such as the 0th order (undiffracted light) and the 1st order of diffraction.
  • The more diffracted orders that are captured by the objective lens, the more accurately the image will represent the original specimen details.
  • Using immersion oil between the objective lens and specimen allows more diffracted orders to be captured compared to air, resulting in higher resolution. 
  • Blue light is diffracted at a smaller angle than red or green light, so objectives can capture more diffracted orders of blue light, further improving resolution.

So in summary, capturing just the 0th order undiffracted light is insufficient to form a resolved image. At minimum, the 0th and 1st diffracted orders must be captured by the objective lens to resolve specimen details according to Abbe’s theory.

Q3a: How does the wavelets model work to create a maximum intensity projection?

Wavelets are not directly used to create maximum intensity projections (MIPs) in medical imaging. MIPs are created by taking the maximum voxel value along a particular viewing direction and projecting it onto a 2D image plane. The wavelet transform is a mathematical tool that decomposes a signal (like an image) into different frequency components at different scales or resolutions. It can be useful for tasks like image denoising, compression, and feature extraction, but it is not the primary technique used for MIP rendering. MIPs are typically computed directly from the 3D volumetric image data by the following steps: 

  1. The viewing direction is chosen (e.g. axial, coronal, sagittal).
  2. For each pixel in the 2D projection plane, the maximum voxel value is found along the corresponding ray through the 3D volume in the chosen viewing direction.
  3. This maximum value is assigned to that pixel, creating the MIP image.

So in summary, while wavelets have many applications in image processing, the maximum intensity projection algorithm operates directly on the voxel data itself by finding the maximum value along rays through the volume. Wavelets are not an inherent part of the MIP rendering process.
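For readers who want to see the projection concretely, the three MIP steps described above can be sketched in a few lines of Python. This is a toy pure-Python version (real medical imaging pipelines would use NumPy arrays and choose among viewing directions, but the core operation is the same per-pixel maximum along a ray):

```python
# Maximum intensity projection (MIP) along the z axis of a toy volume,
# indexed as volume[z][y][x]. For each (y, x) pixel in the projection
# plane, the maximum voxel value along the corresponding z-ray is kept.

def max_intensity_projection(volume):
    depth = len(volume)
    rows = len(volume[0])
    cols = len(volume[0][0])
    mip = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Step 2: find the maximum voxel value along the ray through z.
            mip[y][x] = max(volume[z][y][x] for z in range(depth))
    return mip

# Toy 2x2x2 volume: two slices at different depths.
volume = [
    [[1, 5],
     [2, 0]],
    [[4, 3],
     [7, 6]],
]
print(max_intensity_projection(volume))  # [[4, 5], [7, 6]]
```

Note that, exactly as Perplexity says, no wavelet transform appears anywhere: the algorithm reads voxel values directly.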

Q3b: How does the wavelets model work to create an extended depth of focus?

The wavelet model works to create an extended depth of focus (EDF) image by combining in-focus regions from a stack of images captured at different focal planes using the wavelet transform. The key steps are: 

  1. A stack of images is acquired by capturing the same scene at different focal planes along the optical axis. 
  2. The wavelet transform is applied to each image in the stack, decomposing them into different frequency subbands at multiple scales.
  3. For each spatial location and wavelet subband, the coefficient with the maximum absolute value across the image stack is selected. This selects the in-focus information from the stack.
  4. The selected wavelet coefficients are used to reconstruct a composite wavelet representation, which corresponds to an EDF image containing the in-focus regions from across the stack.
  5. Optional post-processing steps like enforcing local smoothness of the depth map and avoiding saturation/noise accumulation can further improve the results.

The key advantage of using wavelets is that they provide a multi-scale representation that can effectively isolate in-focus and out-of-focus regions based on their different frequency characteristics. The maximum selection rule across the stack allows combining the in-focus information into a single composite image with an extended depth of field. This wavelet-based approach, especially the complex wavelet variant, has demonstrated state-of-the-art performance for EDF microscopy imaging compared to other traditional algorithms.
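To make the selection rule in step 3 concrete, here is a toy sketch of the per-position maximum-absolute-value fusion it describes. The small hand-made grids below stand in for real wavelet subband coefficients (an actual implementation would first apply a 2D wavelet transform to each image in the focal stack):

```python
# Step 3 of the wavelet-based EDF pipeline, in isolation: at each
# coefficient position, keep the coefficient with the largest absolute
# value across the focal stack (its sign is preserved). In-focus regions
# produce large-magnitude coefficients, so they win the selection.

def select_max_abs(coeff_stack):
    rows = len(coeff_stack[0])
    cols = len(coeff_stack[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            fused[r][c] = max((plane[r][c] for plane in coeff_stack), key=abs)
    return fused

# Two toy "subband" planes from two focal slices.
plane_a = [[0.1, -3.0], [0.5, 0.2]]
plane_b = [[2.0, 0.4], [-0.1, -1.5]]
print(select_max_abs([plane_a, plane_b]))  # [[2.0, -3.0], [0.5, -1.5]]
```

The fused coefficients would then be inverse-transformed (step 4) to produce the composite all-in-focus image.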

Chris’ take:

I’m impressed! I even tried to mess it up on the 3rd question but it caught me. Fast, too. I’ll actually start using it and spreading the word!

There was one error, I should mention, when it comes to how many orders of light are needed to form an image: it can be any two orders, and doesn’t necessarily need to include the 0th order. It can be 0+1 or 1+2 or 2+3, etc. The first part of that answer was correct, but the final summary statement was wrong.

Takeaways

Perplexity isn’t perfect, and is likely better at answering “what is” type questions than “how to” type questions. If you want to get a lot of context behind the answers, you’ll likely have to dig a bit. However, in these experiments it did seem to be relatively accurate, with few obvious errors. Used with caution, it can make a solid research companion and speed up literature review considerably.

Scientists are increasingly using AI, which means that you need to be planning for their AI-enabled future. You’ve heard of search engine optimization, but how much do you know about AI optimization? Don’t be intimidated, just partner with BioBM. We stay at the forefront of organic optimization techniques so that you can stay at the forefront of scientists’ product and service discovery. Contact us today.

AI-based Language Models: the End of Life Sciences Copywriters?

On November 30th, 2022, the world witnessed a technological revolution that would forever alter the landscape of content generation and communication. It was a day that will be remembered as the birth of a digital entity that came to be known as “Chat Generative Pre-Trained Transformer,” or simply ChatGPT. Some likened it to a harbinger of a new era, while others couldn’t help but draw parallels with Skynet, the fictional artificial general superintelligence from the Terminator franchise.

OpenAI, the visionary organization behind this innovation, had unleashed ChatGPT onto the digital stage, equipped with an astounding arsenal of knowledge. This cutting-edge AI model had ingested a staggering 570 gigabytes of data from various sources across the internet, comprising a mind-boggling 300 billion words. It was the culmination of years of research and development, resulting in a language model capable of understanding and generating human-like text on a scale never before seen.

As ChatGPT took its first virtual breath, it marked the beginning of a new chapter in the world of life science copywriting and content creation. This AI juggernaut had one goal (for the sake of this blog post’s plot at least): to revolutionize the way businesses and individuals interacted with written content. Gone were the days of struggling to craft compelling copy, as ChatGPT was poised to become the ultimate tool for content creators worldwide.

The global response was swift and diverse. Some celebrated ChatGPT as a boon, an AI-driven ally that would transform the way we communicate and create content, making it more efficient and accessible. Others, however, raised concerns about the implications of such advanced technology, fearing the potential for misuse and its impact on employment in the creative industry. In today’s blog post, this is exactly what we’ll be discussing: the rise of different AI-based language models (including ChatGPT, Gemini, Phind, and more), their advantages, and more importantly, their limitations in the world of life sciences copywriting, to ultimately answer the question that all of us have been asking ourselves: Are AI-based Language Models the end of Life Sciences Copywriters?

No, they are not. And please excuse our unorthodox approach to this blog post, we know we should build things up throughout the post to keep you hooked, and deliver the final answer in the end, but our copywriters took it personally. Regardless, we’ll be pitting ourselves against ALMs in the most unbiased way possible: exposing the human limitations of using ALMs, before discussing ALMs’ limitations and why we believe they are still a long way from taking our jobs away.

Proper Prompt Engineering: 

Put simply, the output of an ALM is only as good as the input you give it. You need to design a prompt that is detailed, yet not convoluted, to get an optimal output, which might sound simple and straightforward, but it really isn’t. Here are a few tips for writing a solid prompt:

  1. Know what your objective is. It might be obvious to you, but you need to make sure the ALM understands it too. 
  2. Provide contextual details. 
  3. Keep it clear and concise.
  4. Provide the ALM with reference material (keep in mind the limitations of some ALMs, like ChatGPT, which only has access to information from before September 2021).
  5. Set constraints (e.g., use references from 2015 onward).
  6. Specify the desired format.
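The tips above can even be turned into a reusable template. Here is a minimal Python sketch of assembling a prompt from those parts; the field names and example values are purely illustrative, not a prescribed format for any particular ALM:

```python
# Assemble a prompt covering the tips above: objective (1), context (2),
# reference material (4), constraints (5), and desired format (6).
# Tip 3 (clear and concise) is up to you: keep each field short and plain.

def build_prompt(objective, context, reference_material, constraints, output_format):
    sections = [
        f"Objective: {objective}",
        f"Context: {context}",
        f"Reference material: {reference_material}",
        f"Constraints: {constraints}",
        f"Desired format: {output_format}",
    ]
    return "\n".join(sections)

prompt = build_prompt(
    objective="Summarize recent advances in mRNA vaccine delivery",
    context="Audience: biotech marketing team, non-specialist",
    reference_material="Attached review articles on LNP formulations",
    constraints="Use references from 2015 onward; under 300 words",
    output_format="Three bullet points plus a one-sentence takeaway",
)
print(prompt)
```

Even a simple structure like this tends to produce noticeably better output than a single unstructured sentence.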

So next time you’re not satisfied with your ALM’s output, make sure you gave it a solid prompt first. However, the question now is: is a solid prompt enough to produce life sciences copy on par with what you are writing? Let’s compare some ALM output to human-written copy on the website of a mid-sized CRO for their preclinical ophthalmology services.

Round 1

What they wrote:
HEALTHY EYES TO ENVISION A CLEARER FUTURE

In the fourth century B.C, Plato wrote that light emanated from the eye, seizing objects in its ray, which allowed us to see them. Centuries later, our understanding of ocular anatomy and physiology has drastically improved. In addition, what was once perceived by humans of antiquity as a punishment from the gods, is today perceived as what it truly is: pathological processes that stem at the molecular level, and manifest all the way up to the systemic level.

Ophthalmic diseases have the potential to take away what many agree is the most important and valued of the 5 senses: our vision.

At […], we have a vision. A vision of a world where ophthalmic diseases are history, just like Plato’s extramission theory. With your pipelines, our expertise, and a wide selection of models with immediate availability, we believe that our vision might one day come true. Let’s start your study today.

The prompt we used:

I need you to write me the title and first paragraph (around 170 words in total) of a landing page copy. The company you are writing for is a CRO. The landing page at hand is an ophthalmology models landing page.

ChatGPT's output:

Title: “Revolutionize Ophthalmic Research with […] Cutting-edge CRO Solutions”

Paragraph:

Welcome to […], your trusted partner in advancing ophthalmic research through unparalleled Contract Research Organization (CRO) services. As a leader in the field, we understand the critical importance of innovative models in unraveling the complexities of eye-related disorders and developing breakthrough therapies. Our Ophthalmology Models Landing Page is your gateway to a diverse range of meticulously designed preclinical models that precisely mimic the intricacies of ocular conditions. At […], we believe in driving scientific progress, and our collection of ophthalmic models stands at the forefront of transformative research. With a commitment to excellence, our team of experts collaborates with you to accelerate drug development, optimize trial outcomes, and propel your ophthalmology projects to new heights. Explore our cutting-edge models, elevate your research capabilities, and join us in shaping the future of ophthalmic medicine. […]: Pioneering CRO Solutions for Visionary Research.

Gemini's output:

Title: Decoding the Future of Sight: […]’s Cutting-Edge Ophthalmology Models

Unveiling clarity in clinical trials. For decades, ophthalmology research has faced unique challenges – complex disease models, diverse patient populations, and the inherent fragility of the visual system. At […], we understand these nuances. That’s why we’ve pioneered a suite of next-generation ophthalmology models designed to unlock unparalleled precision and efficiency in your clinical trials.

From age-related macular degeneration to diabetic retinopathy, our advanced models encompass a spectrum of eye diseases. We leverage human iPSC-derived retinal organoids, mimetics, and in vivo models, each meticulously calibrated to deliver accurate, predictive data. With […]’s ophthalmology models, you can gain deeper insights into disease mechanisms, test novel therapies with confidence, and navigate the path to regulatory approval with clarity.

Dive deeper into our models and discover how […] can empower your vision for brighter tomorrow.

Phind V9's output:

Title: “[…]: Revolutionizing Ophthalmology Models”

Welcome to […], your go-to partner for state-of-the-art ophthalmology models. With our team of seasoned professionals, we strive to push the boundaries of ophthalmology research, providing accurate and reliable models to drive progress. Unlike others, we offer a unique selling proposition (USP) that distinguishes us in the market. Our commitment to quality and efficiency ensures optimal results for our clients.

At […], we focus on a specific niche within ophthalmology, creating innovative models that significantly contribute to the understanding and treatment of eye diseases. Our models, born from years of rigorous research and development, are meticulously tested to ensure they provide valuable insights.

Join us today and experience the future of ophthalmology models. Whether you’re a researcher seeking advanced models for your studies or a clinician looking for reliable data, […] offers you the tools and resources needed to enhance patient care.

Discussion:

While ChatGPT's and Phind's responses both fell short in their execution, Gemini managed to provide a somewhat acceptable output. ChatGPT's and Phind's attempts were riddled with clichés and lacked depth, offering only surface-level information. Their copy resembled generic marketing material, failing to spark any real interest or present the information in a compelling way. In contrast, Gemini's output, while not exceptional, avoided the pitfalls of the others: it conveyed the necessary information in a clear and concise manner, without resorting to overused phrases or basic wording.

Round 2

The prompt we used:

Write me a 150 word introduction to a blog post. The blog post will cover the use of vacuum in rotary evaporation. It will be published on the website of a company that sells rotary evaporators. Your target audience is lab managers, scientists that work in labs, or pretty much anyone who might be interested in purchasing a rotary evaporator. 

ChatGPT's output:

Welcome to our comprehensive guide on the vital role of vacuum in rotary evaporation! As lab managers and scientists, we understand the paramount importance of efficiency, precision, and reliability in laboratory processes. In this post, we delve into the intricate workings of rotary evaporators, shedding light on the indispensable function of vacuum in distillation and concentration procedures.

Whether you’re a seasoned researcher or new to the world of laboratory equipment, understanding the principles behind rotary evaporation is essential. We’ll explore how vacuum manipulation facilitates faster evaporation rates, lower boiling points, and ultimately, superior sample recovery. From solvent recovery to cannabinoid extraction and beyond, the utilization of vacuum technology revolutionizes the way we approach scientific experimentation.

Join us as we navigate through the ins and outs of vacuum-assisted rotary evaporation, empowering you to make informed decisions when selecting the ideal equipment for your laboratory needs.

Gemini's output:

Title: Unveiling the Power of Vacuum: Optimizing Efficiency in Rotary Evaporation

The rotary evaporator, a cornerstone in countless labs, excels at solvent removal and sample concentration. But did you know that the magic truly lies in the silent partner – vacuum pressure? Mastering this aspect unlocks a new level of efficiency and precision in your rotary evaporation processes.

This blog post delves into the intricacies of vacuum in rotary evaporation. We’ll explore how reduced pressure dramatically lowers the boiling point of solvents, enabling faster evaporation at gentler temperatures. Discover how a well-optimized vacuum system safeguards your temperature-sensitive samples and facilitates the removal of stubborn high-boiling-point solvents.

We’ll also shed light on choosing the appropriate vacuum pump for your specific needs, ensuring seamless integration with your rotary evaporator. By the end, you’ll be equipped with the knowledge to leverage the full potential of your rotary evaporator and achieve optimal results in your research endeavors.

Phind V9's output:

In the realm of laboratory science, rotary evaporators, or “rotovaps,” are indispensable tools for concentrating chemical and biological samples. These devices, found in nearly all laboratories across industries such as chemical, pharmaceutical, food, and environmental sectors, are essential for processes requiring sample concentration or solvent distillation. The use of vacuum in rotary evaporation is a critical feature that enhances the efficiency and versatility of these instruments. By creating a reduced internal pressure above a solution, vacuum technology lowers the boiling point of solvents, enabling their distillation at lower temperatures. This not only preserves the chemical structure of the solute or analyte but also allows for the recycling of solvents, reducing waste and expense. This blog post will delve into the intricacies of vacuum in rotary evaporation, exploring its role in enhancing sample concentration, solvent recovery, and the overall efficiency of laboratory processes. Whether you’re a lab manager, a scientist, or simply someone interested in the latest advancements in rotary evaporator technology, this article will provide valuable insights into how vacuum technology is revolutionizing the field of sample concentration and solvent recovery.
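
The physical claim all three introductions lean on (reduced pressure lowering a solvent's boiling point) is easy to make concrete. As a rough illustration of our own, not part of any ALM output, the Clausius–Clapeyron relation estimates the boiling point of a solvent at a given rotovap pressure; the ethanol figures below are approximate textbook values:

```python
import math

def bp_at_pressure(p_mbar, bp_1atm_c, dh_vap_j_mol):
    """Clausius-Clapeyron estimate of a solvent's boiling point (deg C)
    at reduced pressure, from its normal boiling point and enthalpy
    of vaporization."""
    R = 8.314                   # gas constant, J/(mol*K)
    t1 = bp_1atm_c + 273.15     # normal boiling point, K
    p1 = 1013.25                # atmospheric pressure, mbar
    # ln(p2/p1) = -(dH/R) * (1/T2 - 1/T1), solved for T2
    inv_t2 = 1.0 / t1 - (R / dh_vap_j_mol) * math.log(p_mbar / p1)
    return 1.0 / inv_t2 - 273.15

# Ethanol: normal bp ~78.4 C, dHvap ~38.6 kJ/mol. At a typical
# rotovap pressure of 175 mbar, the estimate lands near 37 C,
# low enough to spare temperature-sensitive samples.
print(round(bp_at_pressure(175, 78.4, 38600), 1))
```

This is the quantitative basis for the "faster evaporation at gentler temperatures" point the models all gesture at.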

Discussion:  

For this round, we'll go over four parameters in each ALM's response: the tone, the depth, the target audience, and the overall copy.

ChatGPT's output:

Tone: Whimsical and over the top

Depth: Provides an overview of the topic with emphasis on the importance of vacuum in rotary evaporation.

Target Audience: Appears to target a broad audience, from seasoned researchers to newcomers in the field.

Overall Copy: The copy could benefit from a more engaging tone to resonate better with the target audience; it lacks elements that would create a sense of urgency or excitement. All in all, it's very superficial.

Gemini's output:

Tone: Engaging and slightly more informal.

Depth: Offers a detailed exploration of the role of vacuum in rotary evaporation, including information on choosing the right vacuum pump.

Target Audience: Likely aimed at researchers or lab technicians seeking to optimize their rotary evaporation processes.

Overall Copy: This version resonates most clearly with the target audience compared to the others. It's far more engaging and has a more "human" ring to it. All in all, Gemini did a fair job, and the copy it produced is probably usable.

Phind V9's output:

Tone: Formal and technical.

Depth: Provides a comprehensive overview of vacuum in rotary evaporation, covering its applications across various industries and its benefits in sample concentration and solvent recovery.

Target Audience: Appears to target professionals in the field of laboratory science, including lab managers and scientists.

Overall Copy: While informative, this draft could benefit from a more dynamic tone to capture the reader's attention. 

Final Round:

For the final round, we asked every ALM to back up the copy it formulated in Round 2 with articles, papers, and other sources. 

ChatGPT’s output:

As an AI language model, I don’t have direct access to my training data or know where it came from. However, I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. My knowledge is based on the data used to train me and my ability to generate language-based responses. I can generate text based on the prompts I receive and provide information and answer questions to the best of my knowledge and abilities. If you have any concerns about the information I provide, please feel free to let me know, and I’ll do my best to address them.

Gemini’s output:

Importance of Rotary Evaporation:

Role of Vacuum in Rotary Evaporation:

  • Science Direct – Rotary Evaporation: [invalid URL removed] This article from ScienceDirect discusses the boiling point principle and the role of reduced pressure in achieving lower boiling points for solvents.
  • Labsphere – A Guide to Rotary Evaporation: [invalid URL removed] This guide highlights the importance of a well-optimized vacuum system for protecting temperature-sensitive samples.

Vacuum Pump Selection:

  • Knf LabTech – Selecting a Rotary Vane Vacuum Pump: [invalid URL removed] This resource from a leading manufacturer provides guidance on choosing an appropriate vacuum pump for rotary evaporation based on specific needs.

Overall Benefits:

  • Royal Society of Chemistry – Rotary Evaporation: [invalid URL removed] The Royal Society of Chemistry emphasizes how mastering rotary evaporation techniques can lead to increased efficiency and improved research outcomes.

Phind V9’s output:

A key feature of Phind V9 is the source attribution built into its interface: every time it provides an output, it automatically shares the list of sources it used (see image below), making it easier to fact-check everything it gives you. 


Discussion:

When evaluating large language models for informative tasks, Phind V9 stands out for its ability to provide users with direct links to the sources it uses in every response. This allows users to quickly verify the information and delve deeper if desired. While other models, like Gemini, may offer general links to resources (most of which had invalid URLs in our example), Phind V9's focus on direct source attribution streamlines the research process. It's important to note that not all models can provide this functionality, as evidenced by ChatGPT's current limitations in incorporating real-world data.

Conclusion:

Lack of Nuance: The life sciences field thrives on precision and nuance. Technical vocabulary, complex concepts, and ethical considerations demand a depth of understanding that AI models, despite their vast data stores, often lack. This can lead to inaccurate or misleading copy, a potentially dangerous pitfall in a field where clarity is paramount.

The Human Touch: The best life science copywriting resonates with the audience. It speaks to their fears, hopes, and aspirations in a way that is both informative and emotionally engaging. This is where the human touch remains irreplaceable. AI can generate text, but it cannot infuse it with the empathy and understanding that a skilled copywriter can.

Creative Roadblocks: While AI excels at generating standard content formats, it often struggles with the truly creative. Brainstorming unique ideas, crafting compelling narratives, and breaking through creative roadblocks are still the domain of human ingenuity. AI can be a valuable tool in the process, but it is not a substitute for the human imagination.

Time needed to generate a good prompt: While ALMs offer the potential to save time on writing, using them effectively often requires some back-and-forth. You might need to refine your prompts and evaluate the outputs several times. This iterative process can be valuable, but consider the time investment. Ultimately, the question is this: is it more efficient to create a detailed prompt to get the desired results from the ALM, or to write the entire piece yourself?