We’re the agency you turn to when the answer to your marketing problems isn’t “throw more money at it.” We’re the agency you turn to when you’re facing challenging situations that require creative solutions to be successful. And we’re the agency you turn to when you’re looking for marketing to not only help you achieve success, but actually create strategic advantage for your company.
BioBM's blog has been around as long as BioBM - since 2010 - so there's a lot in it. We recommend using our Resource Center if you're looking for information on a particular topic, but if you can't find it there our Blog Archives will make it easier to find what you're looking for.
There has been a lot of talk about AI optimization in the marketing world, much of which was spurred by the release of a preprint article published to arXiv (PDF) in September which demonstrated that LLMs can be manipulated to increase product visibility. There is even a term for the practice, akin to search engine optimization: Generative Engine Optimization, or GEO. Of course, we were immediately interested in whether any of this is meaningful to marketers in the life sciences.
Our friends at Laboratory Supply Network recently beat us to the punch and asked Reddit’s Labrats community if they use LLMs to help them find scientific products. Good question! Apparently it is also one with a clear answer.
This is a relatively small poll, but the results are so skewed that they are likely telling. In this poll, 80% of scientists responded that they never use AI for product discovery: literally zero percent of the time! Another 14% barely ever use it. Only two respondents said they use it roughly 10% of the time or more, with one saying they use it more than half the time.
Some of the comments indicate that scientists simply don’t see any relative value in AI for scientific product discovery, or see much better value from other means of product discovery.
Maybe there will be a day when scientists adopt AI for product discovery in meaningful numbers, but it seems we aren’t there yet.
"Want scientists to discover your products and services? Contact BioBM. Our efficient and forward-looking demand generation strategies give life science companies the edge to get ahead and stay ahead. The earlier you engage with us, the more we can help. Work with BioBM."
The generative text AI tool Perplexity has rapidly gained popularity in the life sciences for its ability to show its references when answering. As those references can include scholarly articles, it has great potential as a literature review assistant for scientists. AIs are well-known for both hallucinations and inaccuracy, however, and being incorrect even a small percentage of the time is normally not something a professional scientist can allow.
We therefore decided to set up a little experiment: we asked three subject matter experts in the life sciences to ask Perplexity a series of questions of increasing difficulty, then evaluate the answers:
A question which a grad student should be able to answer relatively easily
A question which most PIs in the field would be able to answer
A question requiring very specific knowledge to answer. Something which a scientist wouldn’t know unless it was within their specific research area.
Here’s what they asked, what Perplexity had to say, verbatim, and the scientists’ take on Perplexity’s responses.
"Some quick notes on this experiment: The scientists were not asked to proof / correct Perplexity’s responses, but rather just to provide overall feedback, which is found below the questions. They all used the free version."
Steve Perrault, PhD, former Associate Director of Research & Development at Foundation Medicine
Q1: What cell type is targeted by mRNA vaccines?
mRNA vaccines primarily target and deliver their mRNA cargo to antigen-presenting cells (APCs), particularly dendritic cells and macrophages.
Upon administration, the mRNA encapsulated in lipid nanoparticles is taken up by APCs. Once inside the APCs, the mRNA is released into the cytosol and translated into the encoded antigenic protein.
This antigenic protein is then processed and presented on the surface of the APCs in the context of major histocompatibility complex (MHC) molecules, allowing for the activation of antigen-specific T cells and initiation of an adaptive immune response.
The ability of mRNA vaccines to efficiently target and activate APCs in vivo has been demonstrated in studies using non-human primates like rhesus macaques.
These studies have shown that modified mRNA vaccines can effectively deliver their mRNA cargo to APCs, leading to robust antigen expression and subsequent activation of antigen-specific T cells.
Furthermore, the size of the lipid nanoparticles used for mRNA delivery plays a crucial role in their trafficking and uptake by APCs. Studies have shown that nanoparticles within a specific size range (around 20-100 nm) can efficiently drain into lymph nodes and penetrate the paracortex region, where APCs reside, facilitating their uptake and antigen presentation.
In summary, mRNA vaccines primarily target and deliver their mRNA cargo to antigen-presenting cells, particularly dendritic cells and macrophages, enabling the expression of the encoded antigenic protein and subsequent activation of antigen-specific T cells, which is a key mechanism underlying their immunogenicity.
Q2: How do mRNA vaccines escape endosomes?
mRNA vaccines are delivered into cells encapsulated in lipid nanoparticles (LNPs). For the mRNA to be effective, it must escape from the endosomes into the cytoplasm where it can be translated into proteins. The endosomal escape process is facilitated by the ionizable lipids in the LNP formulation:
Upon endocytosis, the LNP becomes trapped in an endosome, which is an acidic compartment within the cell
The acidic environment of the endosome causes the ionizable lipids in the LNP to become positively charged
This positive charge allows the LNP to interact with and disrupt the negatively charged endosomal membrane through electrostatic interactions
The shape change and disruption of the endosomal membrane enables the release of the mRNA cargo from the endosome into the cytoplasm
Optimizing the pKa values and lipid tail structures of the ionizable lipids can enhance their ability to become protonated and facilitate endosomal escape
Early endocytic compartments like recycling endosomes have been identified as key sites for productive mRNA escape, whereas prolonged retention in late endosomes leads to mRNA degradation
So in summary, the ionizable lipids undergo a charge-induced shape change in the acidic endosome that allows the LNP to disrupt the endosomal membrane and release its mRNA cargo into the cytosol for translation
Q3: What’s the structure of the ionizable lipid used for endosome escape with mRNA vaccines?
The ionizable lipids used in mRNA vaccines like the Pfizer and Moderna COVID-19 vaccines have a unique structure that facilitates endosomal escape and delivery of mRNA into cells.
Key Structural Features:
Ionizable Head Group: The lipids contain a protonatable or ionizable amino head group, typically a tertiary amine. This allows the lipid to be neutral at physiological pH but become positively charged in the acidic environment of endosomes.
Branched Lipid Tails: The lipids have branched or multi-tail lipid chains, rather than linear tails. This branched structure is believed to promote a cone-shaped molecular geometry that aids in endosomal membrane destabilization and escape.
Pfizer’s ALC-0315 has four short branched tails.
Moderna’s SM-102 has three branched tails, one of which is longer.
Ester Linkers: The lipid tails are often connected to the head group via ester linkers, which can influence the biodegradability and potency of the lipid.
The combination of the ionizable head group, branched tails, and ester linkers allows these lipids to effectively complex with mRNA, facilitate endosomal escape through the proposed “proton sponge” effect and membrane destabilization, and enable cytosolic delivery of the mRNA cargo.
Bonus historical science question: How did DNA nanotechnology start?
DNA nanotechnology originated from a landmark theoretical paper published in 1982 by Ned Seeman, which proposed the idea of using DNA molecules to create nanostructures.
This seminal work laid the foundation for the field and sparked the development of various techniques and applications in DNA nanotechnology.
The key idea proposed by Seeman was to exploit the predictable base-pairing interactions of DNA to construct artificial nanostructures with precise control over their shape and size.
He envisioned using synthetic DNA strands designed to self-assemble into desired geometries, taking advantage of DNA’s inherent programmability and molecular recognition capabilities.
The first experimental demonstration of this concept came in the early 1990s when Seeman and his colleagues successfully constructed DNA-based nanostructures, such as a cube and a truncated octahedron.
These initial achievements paved the way for the rapid growth of DNA nanotechnology, leading to the development of various techniques like DNA origami, DNA tiles, and DNA nanorobots.
Over the years, DNA nanotechnology has evolved into an interdisciplinary field, bringing together researchers from diverse areas like chemistry, physics, materials science, and computer science to explore the potential of DNA-based nanostructures for various applications, including nanoelectronics, biosensing, drug delivery, and molecular computing.
Steve’s take on Perplexity:
Steve said Perplexity was “pretty solid” but criticized it for a “lack of context, like if you’re in that field and wanting to understand a bigger picture based on a question, that isn’t there.”
But on a super reductionist view it seems pretty good. Like, what someone might give on an exam answer after studying, but not what someone who really knows a field should hopefully say to provide that bigger picture.
I don’t see anything “wrong” but someone who really knows their stuff would give a broader answer. This comes across as a pre-med major who’s really good at studying, and the tech details are accurate.
For the second two questions, if someone asked me this I would give a context of this solving a problem that had been around since 1960s, was solved by someone at UBC, how it was solved, why it matters. Things that provide way more depth.
Regarding the bonus historical question on the origins of DNA nanotechnology which he decided to ask, he said: “This is accurate but misses the nuances that one can only know from really diving into it. I wouldn’t expect AI to come up with that. Probably not possible. But the technical details are right.”
Hussain Dahodwala, PhD, Associate Professor at the IBBR Institute for Bioscience and Biotechnology Research and former Director, Upstream Process Development at the National Institute for Innovation in Manufacturing Biopharmaceuticals (NIIMBL)
Q1: What are the challenges in enzyme engineering and what advances are anticipated in enzyme engineering for biobetters?
The key challenges and advances in enzyme engineering for biobetters are:
Challenges in Enzyme Engineering:
• Enzymes often have poor stability under industrial conditions, such as high temperatures, pressures, and organic solvents. This leads to poor product yields.
• Enzymes may have narrow substrate specificity, limiting their ability to work with non-natural substrates.
• Enzymes can be difficult to produce at scale in industrial fermentation processes.
Advances in Enzyme Engineering:
• Enzyme engineering techniques like directed evolution, rational design, and semi-rational design are being used to improve enzyme properties such as activity, specificity, stability, and stereoselectivity.
• Computational methods and machine learning are helping to expand the enzyme design space and accelerate the discovery of novel enzymes with desired properties.
• Strategies like enzyme immobilization, molecular modifications, and the use of nanoparticle or cell-based delivery systems are being explored to enhance the stability and performance of enzyme therapeutics.
• Advances in biotechnology and recombinant expression are enabling the industrial-scale production of engineered enzymes.
Overall, enzyme engineering is a rapidly advancing field that is critical for developing efficient, stable, and versatile biocatalysts to enable sustainable and cost-competitive bio-based processes.
Q2: What is the inspiration for Biomanufacturing 4.0 and how is it different from current biomanufacturing?
Biomanufacturing 4.0, also known as Pharma 4.0, is inspired by the broader Industry 4.0 concept and is focused on the digital transformation of biomanufacturing processes. The key differences between Biomanufacturing 4.0 and current biomanufacturing practices are:
Increased use of advanced digital technologies: Biomanufacturing 4.0 leverages technologies like artificial intelligence, machine learning, digital twins, and advanced process control to enable greater process visibility, optimization, and predictive capabilities.
Emphasis on data-driven decision making: Rather than relying solely on empirical knowledge, Biomanufacturing 4.0 aims to incorporate data analytics and modeling to make more informed, data-driven decisions throughout the product lifecycle.
Shift towards continuous improvement: Biomanufacturing 4.0 promotes a culture of continuous improvement, enabled by the increased process understanding and agile regulatory frameworks that allow for more efficient implementation of process changes.
Enhanced supply chain robustness: The advanced process control and predictive capabilities of Biomanufacturing 4.0 can lead to improved supply chain reliability and the ability to better manage variability, ultimately enhancing product quality and availability.
Accelerated product development: The digital tools and data-driven approach of Biomanufacturing 4.0 can help compress product development timelines, allowing for faster access to life-changing therapies.
In summary, Biomanufacturing 4.0 is inspired by the broader Industry 4.0 vision and aims to transform biomanufacturing through increased digitalization, data-driven decision making, and a focus on continuous improvement and supply chain robustness.
Q3a: What are non-canonical redox co-factors and how do we use them in cell culture?
Non-canonical redox cofactors (NRCs) are chemical compounds that can serve as alternatives to the natural cofactors like NAD(P)+ and NAD(P)H in biological redox reactions. Some key points about NRCs: NRCs can overcome the limitations of natural cofactors, enabling specific delivery of reducing power outside of biological constraints and providing alternative reduction potentials to overcome thermodynamic barriers.
Examples of NRCs include 3-carbamoyl-1-phenethylpyridin-1-ium chloride (P2NA+) and 3-acetylpyridine adenine dinucleotide, which have been used in purified enzyme-based redox catalysis to increase economic viability and drive thermodynamically unfavorable reactions, respectively.
Researchers have developed methods to engineer microbes like E. coli to biosynthesize NRCs like nicotinamide mononucleotide (NMN+) to enable their full integration with renewable biomanufacturing processes. This involves developing growth-based screening platforms and exploring pathway combinations to achieve high intracellular NMN+ levels.
Using NRCs in cell culture can provide more economical and scalable biotransformation, as well as enable exquisite control and expand the range of chemistries accessible compared to natural cofactors. However, the impact of NRC accumulation on cell fitness must also be considered.
In summary, NRCs are emerging as important tools in cell-free biosynthesis and metabolic engineering to overcome the limitations of natural cofactors and enable new capabilities in biocatalysis and biomanufacturing.
Perplexity did not attempt to answer the second part of the question, and Hussain made another attempt to get it to answer.
Q3b: How to use non canonical redox co-factors in cell culture?
Based on the search results, here is how non-canonical redox co-factors can be used in cell culture: Metabolites and post-translational modifications (PTMs) can regulate cell signaling and gene expression through non-canonical mechanisms. Metabolites can serve as co-factors for enzymes and modify proteins through PTMs like palmitoylation, glutathionylation, and acetylation. These PTMs can alter protein interactions, stability, and function, thereby aligning cellular processes with changes in metabolic state. For example, glutathionylation of the transcription factor NF-κB decreases its ability to bind DNA, linking redox state to gene expression. Histone modifications like lactylation and crotonylation can also regulate gene transcription in response to metabolic cues. Additionally, palmitoylation can increase protein stability by blocking ubiquitination and proteasomal degradation. Several studies have demonstrated the use of non-canonical redox co-factors to modulate cell signaling and death pathways in cell culture:
Dihydroartemisinin, a semi-synthetic derivative of artemisinin, induced ferroptosis in pancreatic cancer cells by altering redox homeostasis and lipid peroxidation.
The natural product 11-methoxytabersonine triggered necroptosis in glioblastoma cells through ROS generation, calcium signaling, and mitochondrial dysfunction.
Xanthohumol, a chalcone from hops, activated the Nrf2 antioxidant pathway while inhibiting NF-κB in pancreatic cancer cells, leading to reduced inflammation and oxidative stress.
Sulforaphane, an isothiocyanate from cruciferous vegetables, upregulated Nrf2 and antioxidant enzymes while downregulating NF-κB in breast cancer and macrophage cell lines.
In summary, non-canonical redox co-factors like metabolites and their PTMs can be leveraged in cell culture to modulate signaling pathways, gene expression, and cell death mechanisms in a targeted manner by exploiting their unique biochemical properties.
Hussain’s take:
These are questions that I regularly encounter in scientific conferences and use to test graduate student knowledge. I can say that these questions can be answered by biotechnologists with a few years of experience.
The questions chosen had two parts each. One part would require “book learning” and the second part would require domain knowledge, some experience, and “intelligence” to answer.
In all 3 examples, Perplexity AI did an acceptable job in text mining the first half of the question. The definitions and concepts asked about were answered more accurately, with more clarity, and with citations, beyond what one would expect a human to recall.
Complexity of jargon was increased with each question, and in each case Perplexity AI was able to text mine the definitions. For reference, in my experience many grad students fumble at the third question.
However, in every question, Perplexity AI ignored the second half of the question beyond the definitions. In other words, it does great at “what is” and fails at “how to”.
We attempted to separate the two parts with the third question, and the AI still failed to give an answer.
Chris Bjornsson, PhD, Embedded 3D & Automated Microscopy Specialist at ZEISS.
Q1: How many tissue types are there in the human body?
There are four main types of tissues in the human body: epithelial, connective, muscle, and nervous tissues.
Epithelial Tissues
Epithelial tissues are sheets of cells that cover exterior surfaces of the body, line internal cavities and passageways, and form glands. Their main functions include protection, secretion, absorption, and excretion.
Connective Tissues
Connective tissues bind together and support other tissues. They include bone, cartilage, blood, adipose (fat), and fibrous connective tissues like tendons and ligaments. Their main functions are to bind, support, protect, insulate, store energy, and produce blood cells.
Muscle Tissues
Muscle tissues are composed of cells called muscle fibers that have the ability to contract or shorten, allowing for movement. There are three types: skeletal (voluntary), smooth (involuntary), and cardiac (heart muscle).
Nervous Tissues
Nervous tissues are composed of neurons and neuroglial cells that propagate electrochemical signals called nerve impulses, enabling communication within the body and the processing of responses. The different tissue types work together to perform the various functions required for the human body to operate and maintain homeostasis.
Q2: How many orders of light does it take to form a microscope image?
To form a microscope image, it takes at least two orders of light according to the Abbe theory of image formation. The key points are:
According to Ernst Abbe’s theory, for a specimen detail to be resolved in the microscope image, the objective lens must capture at least two orders of diffracted light from the specimen, such as the 0th order (undiffracted light) and the 1st order of diffraction.
The more diffracted orders that are captured by the objective lens, the more accurately the image will represent the original specimen details.
Using immersion oil between the objective lens and specimen allows more diffracted orders to be captured compared to air, resulting in higher resolution.
Blue light is diffracted at a smaller angle than red or green light, so objectives can capture more diffracted orders of blue light, further improving resolution.
So in summary, capturing just the 0th order undiffracted light is insufficient to form a resolved image. At minimum, the 0th and 1st diffracted orders must be captured by the objective lens to resolve specimen details according to Abbe’s theory.
Q3a: How does the wavelets model work to create a maximum intensity projection?
Wavelets are not directly used to create maximum intensity projections (MIPs) in medical imaging. MIPs are created by taking the maximum voxel value along a particular viewing direction and projecting it onto a 2D image plane. The wavelet transform is a mathematical tool that decomposes a signal (like an image) into different frequency components at different scales or resolutions. It can be useful for tasks like image denoising, compression, and feature extraction, but it is not the primary technique used for MIP rendering. MIPs are typically computed directly from the 3D volumetric image data by the following steps:
The viewing direction is chosen (e.g. axial, coronal, sagittal).
For each pixel in the 2D projection plane, the maximum voxel value is found along the corresponding ray through the 3D volume in the chosen viewing direction.
This maximum value is assigned to that pixel, creating the MIP image.
So in summary, while wavelets have many applications in image processing, the maximum intensity projection algorithm operates directly on the voxel data itself by finding the maximum value along rays through the volume. Wavelets are not an inherent part of the MIP rendering process.
Q3b: How does the wavelets model work to create an extended depth of focus?
The wavelet model works to create an extended depth of focus (EDF) image by combining in-focus regions from a stack of images captured at different focal planes using the wavelet transform. The key steps are:
A stack of images is acquired by capturing the same scene at different focal planes along the optical axis.
The wavelet transform is applied to each image in the stack, decomposing them into different frequency subbands at multiple scales.
For each spatial location and wavelet subband, the coefficient with the maximum absolute value across the image stack is selected. This selects the in-focus information from the stack.
The selected wavelet coefficients are used to reconstruct a composite wavelet representation, which corresponds to an EDF image containing the in-focus regions from across the stack.
Optional post-processing steps like enforcing local smoothness of the depth map and avoiding saturation/noise accumulation can further improve the results.
The key advantage of using wavelets is that they provide a multi-scale representation that can effectively isolate in-focus and out-of-focus regions based on their different frequency characteristics. The maximum selection rule across the stack allows combining the in-focus information into a single composite image with an extended depth of field. This wavelet-based approach, especially the complex wavelet variant, has demonstrated state-of-the-art performance for EDF microscopy imaging compared to other traditional algorithms.
Chris’ take:
I’m impressed! I even tried to mess it up on the 3rd question but it caught me. Fast, too. I’ll actually start using it and spreading the word!
There was one error, I should mention, when it comes to how many orders of light are needed to form an image: it can be any two orders, and doesn’t necessarily need to include the 0th order. It can be 0+1 or 1+2 or 2+3, etc. The first part of that answer was correct, but the final summary statement was wrong.
Takeaways
Perplexity isn’t perfect, and is likely better at answering “what is” type questions than “how to” type questions. If you want to get a lot of context behind the answers, you’ll likely have to dig a bit. However, in these experiments it did seem to be relatively accurate, with few obvious errors. Used with caution, it can make a solid research companion and speed up literature review considerably.
"Scientists are increasingly using AI, which means that you need to be planning for their AI-enabled future. You’ve heard of search engine optimization, but how much do you know about AI optimization? Don’t be intimidated, just partner with BioBM. We stay at the forefront of organic optimization techniques so that you can stay at the forefront of scientists’ product and service discovery. Contact us today."
Sites can use the HTML attribute rel=”nofollow” to instruct search engines not to credit a link with any importance for the purposes of SEO
These instructions don’t carry authority: they are merely suggestions
Search engines, including Google, choose whether to listen to the nofollow suggestion or not
They generally do not listen to the suggestion
If you can generate contextually relevant backlinks from sites which use nofollow tags, go for it! You’ll likely get value from them regardless. Just don’t be spammy.
The History of HTML Link Relationship Tags
As the name implies, a link relationship tag provides context to search engines and other automated crawlers on the nature of the relationship between the source page and the destination page. Some very common ones which marketers may run into are rel=”sponsored”, which denotes links in sponsored content, rel=”ugc”, which denotes links in user-generated content, and rel=”nofollow”, which is supposed to tell search engines to completely ignore a link. Over 100 link relations are recognized by the Internet Assigned Numbers Authority, but most of them are somewhat arcane and not used by search engines in any way that would be meaningful to marketers.
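To make the mechanics concrete, here is a minimal sketch of how a crawler can read a link’s rel values before deciding how much weight, if any, to give that link. This is purely our own illustration using the Python standard library; the HTML snippet and class name are hypothetical, not any search engine’s actual code.

```python
from html.parser import HTMLParser

class LinkRelParser(HTMLParser):
    """Collect each link's destination and rel values, roughly as a crawler might."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attributes = dict(attrs)
            # rel can hold several space-separated values, e.g. "nofollow ugc"
            rel_values = (attributes.get("rel") or "").split()
            self.links.append((attributes.get("href"), rel_values))

parser = LinkRelParser()
parser.feed('<a href="https://example.com/" rel="nofollow ugc">a user comment</a>')
print(parser.links)  # [('https://example.com/', ['nofollow', 'ugc'])]
```

Under Google’s post-2020 model, those values are hints: the crawler sees them, but the ranking systems decide whether to honor them.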
Link relationship tags, AKA rel tags, came into being in 2005, largely in response to the need for a nofollow tag to combat the excessive blog, comment, and forum spam which was extremely prevalent through the 2000s. Nofollow was proposed by Google’s Matt Cutts and Blogger’s Jason Shellen. For a long time, because they didn’t have a better option, Google and other search engines treated nofollow tags as law. Not only would they give no SEO benefit to nofollow links, but for a long time Google wouldn’t even index them.
The Evolution of Nofollow
As blog and comment spam became less of an issue, and as search engines became much more powerful and able to understand context, nofollow and similar relationship tags became less important to the search engines. Google effectively said as much in an announcement on their Search Central Blog on September 10, 2019:
When nofollow was introduced, Google would not count any link marked this way as a signal to use within our search algorithms. This has now changed. All the link attributes—sponsored, ugc, and nofollow—are treated as hints about which links to consider or exclude within Search. We’ll use these hints—along with other signals—as a way to better understand how to appropriately analyze and use links within our systems.
Why not completely ignore such links, as had been the case with nofollow? Links contain valuable information that can help us improve search, such as how the words within links describe content they point at. Looking at all the links we encounter can also help us better understand unnatural linking patterns. By shifting to a hint model, we no longer lose this important information, while still allowing site owners to indicate that some links shouldn’t be given the weight of a first-party endorsement.
As stated in the post, as of March 1, 2020 Google changed the role of link relationship tags, making them suggestions (or, in Google’s words, “hints”) rather than rules.
Context Is Key
As search engines continue to become more intelligent and human-like in their understanding of context within content, life science SEO professionals need to pay greater attention to context. A nofollow backlink with just one or two sentences in a comment on a relevant Reddit post may be worth more than an entire guest post on a site with little other content relevant to your field. Focus on doing all the things which you should be doing anyway, regardless of whether the link is nofollow or not:
Post links only in relevant places
Contribute meaningfully to the conversation
Don’t be spammy
Keep your use of links to a minimum
Write naturally and use links naturally. Don’t force it.
Case: Laboratory Supply Network
Laboratory Supply Network started a backlinking campaign with BioBM in August 2023 which relied almost entirely on backlinks in comments from highly reputable websites (including Reddit, ResearchGate, and Quora), all of which use nofollow tags on their links. At the start of the campaign, their key rank statistics were:
Average rank: 26.08
Median rank: 14
% of terms in the top 10: 45.00% (63 out of 140)
% of terms in the top 3: 21.43% (30 out of 140)
Less than 8 months later, in March 2024, we had improved their search rank statistics massively:
Average rank: 17.54
Median rank: 7
% of terms in the top 10: 61.11% (88 out of 144)
% of terms in the top 3: 39.58% (57 out of 144)
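For readers who want to track the same statistics for their own keyword set, here’s a minimal sketch of how these summary numbers can be computed from a list of ranks. This is illustrative code, not the tooling we used, and the example ranks are made up.

```python
from statistics import mean, median

def rank_stats(ranks):
    """Summarize keyword ranks: average, median, and share in the top 10 / top 3."""
    return {
        "average_rank": round(mean(ranks), 2),
        "median_rank": median(ranks),
        "pct_top_10": 100 * sum(r <= 10 for r in ranks) / len(ranks),
        "pct_top_3": 100 * sum(r <= 3 for r in ranks) / len(ranks),
    }

# Hypothetical example: five tracked keywords and their current Google ranks
print(rank_stats([3, 7, 12, 28, 1]))
```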
Backlinking was not the only thing that Laboratory Supply Network was doing to improve its SEO – it has a longstanding and relatively consistent content generation program, for instance – but the big difference before and after was the backlink campaign (which, again, relied almost entirely on nofollow backlinks!). In the previous year, LSN’s search statistics didn’t improve nearly as much.
Conclusions
Backlinking has long been a key component of a holistic SEO strategy, and it remains just as important as ever. Links are an important signal telling Google and other search engines what content is relevant and important with regards to any particular topic. While many highly reputable sites use rel=”nofollow” to try to discourage link spam, most link spam is more effectively dealt with in other ways, such as manual, automated, or community-driven moderation. Google knows these other moderation tools have become more effective, and therefore allows itself to treat the nofollow tag as more of a hint than a rule. If you are performing SEO for your life science company, don’t avoid sites just because they use nofollow. You can achieve good results in spite of it.
"Looking to improve your search ranks and boost your organic lead generation? Work with BioBM. For over a decade, BioBM has been implementing proven SEO strategies that get our clients get to the top of the search ranks and stay there. Don’t wait. Start the conversation today."
On November 30th, 2022, the world witnessed a technological revolution that would forever alter the landscape of content generation and communication. It was a day that will be remembered as the birth of a digital entity that came to be known as “Chat Generative Pre-Trained Transformer,” or simply ChatGPT. Some likened it to a harbinger of a new era, while others couldn’t help but draw parallels with Skynet, the fictional dystopian artificial neural network-based conscious group mind and artificial general superintelligence from the Terminator franchise.
OpenAI, the visionary organization behind this innovation, had unleashed ChatGPT onto the digital stage, equipped with an astounding arsenal of knowledge. This cutting-edge AI model had ingested a staggering 570 gigabytes of data from various sources across the internet, comprising a mind-boggling 300 billion words. It was the culmination of years of research and development, resulting in a language model capable of understanding and generating human-like text on a scale never before seen.
As ChatGPT took its first virtual breath, it marked the beginning of a new chapter in the world of life science copywriting and content creation. This AI juggernaut had one goal (for the sake of this blog post’s plot at least): to revolutionize the way businesses and individuals interacted with written content. Gone were the days of struggling to craft compelling copy, as ChatGPT was poised to become the ultimate tool for content creators worldwide.
The global response was swift and diverse. Some celebrated ChatGPT as a boon, an AI-driven ally that would transform the way we communicate and create content, making it more efficient and accessible. Others, however, raised concerns about the implications of such advanced technology, fearing the potential for misuse and its impact on employment in the creative industry. In today’s blog post, this is exactly what we’ll be discussing: the rise of different AI-based language models, or ALMs (including ChatGPT, Gemini, Phind, and more), their advantages, and, more importantly, their limitations in the world of life sciences copywriting, to ultimately answer the question that all of us have been asking ourselves: are AI-based language models the end of life sciences copywriters?
No, they are not. And please excuse our unorthodox approach to this blog post; we know we should build things up throughout the post to keep you hooked and deliver the final answer at the end, but our copywriters took it personally. Regardless, we’ll be pitting ourselves against ALMs in the most unbiased way possible: exposing the human limitations of using ALMs before discussing ALMs’ own limitations and why we believe they are still a long way from taking our jobs away.
Proper Prompt Engineering:
Put simply, the output of an ALM is only as good as the input you give it. You need to design a prompt to feed into the ALM that is detailed, yet not convoluted, to get an optimal output. That might sound simple and straightforward, but it really isn’t. Here are a few tips for writing a solid prompt, with a short sketch after the list:
Know what your objective is. It might be obvious to you, but you need to make sure the ALM understands it too.
Provide contextual details.
Keep it clear and concise.
Provide the ALM with reference material (keeping in mind the limitations of some ALMs, like ChatGPT, which only has access to information from before September 2021).
Set constraints (e.g., use references dating from 2015 forward).
Specify the desired format.
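To illustrate those tips in practice, here is a minimal sketch of how a prompt might be assembled so that nothing is left implicit. The function, field names, and example values are our own hypothetical illustration, not a guaranteed recipe.

```python
def build_prompt(objective, context, references, constraints, output_format):
    """Assemble a prompt that states the objective, context, references,
    constraints, and desired format explicitly, rather than making the ALM guess."""
    return "\n".join([
        f"Objective: {objective}",
        f"Context: {context}",
        f"Reference material: {references}",
        f"Constraints: {constraints}",
        f"Desired format: {output_format}",
    ])

print(build_prompt(
    objective="Write a 150-word introduction to a blog post on vacuum in rotary evaporation.",
    context="Published by a company that sells rotary evaporators; audience is lab managers and lab scientists.",
    references="The manufacturer's documentation on vacuum pumps and solvent boiling points.",
    constraints="Use references dating from 2015 forward; avoid marketing cliches.",
    output_format="A title line followed by one ~150-word paragraph.",
))
```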
So next time you’re not satisfied with your ALM’s output, make sure you gave it a solid prompt first. However, the question now is: is a solid prompt enough to produce life sciences copy that is on par with what you are writing? Let’s compare some ALM output to human-written copy on the website of a mid-sized CRO for their preclinical ophthalmology services.
Round 1
What they wrote: HEALTHY EYES TO ENVISION A CLEARER FUTURE
In the fourth century B.C, Plato wrote that light emanated from the eye, seizing objects in its ray, which allowed us to see them. Centuries later, our understanding of ocular anatomy and physiology has drastically improved. In addition, what was once perceived by humans of antiquity as a punishment from the gods, is today perceived as what it truly is: pathological processes that stem at the molecular level, and manifest all the way up to the systemic level.
Ophthalmic diseases have the potential to take away what many agree is the most important and valued of the 5 senses: our vision.
At […], we have a vision. A vision of a world where ophthalmic diseases are history, just like Plato’s extramission theory. With your pipelines, our expertise, and a wide selection of models with immediate availability, we believe that our vision might one day come true. Let’s start your study today.
The prompt we used:
I need you to write me the title and first paragraph (around 170 words in total) of a landing page copy. The company you are writing for is a CRO. The landing page at hand is an ophthalmology models landing page.
ChatGPT’s output:
Title: “Revolutionize Ophthalmic Research with […] Cutting-edge CRO Solutions”
Paragraph:
Welcome to […], your trusted partner in advancing ophthalmic research through unparalleled Contract Research Organization (CRO) services. As a leader in the field, we understand the critical importance of innovative models in unraveling the complexities of eye-related disorders and developing breakthrough therapies. Our Ophthalmology Models Landing Page is your gateway to a diverse range of meticulously designed preclinical models that precisely mimic the intricacies of ocular conditions. At […], we believe in driving scientific progress, and our collection of ophthalmic models stands at the forefront of transformative research. With a commitment to excellence, our team of experts collaborates with you to accelerate drug development, optimize trial outcomes, and propel your ophthalmology projects to new heights. Explore our cutting-edge models, elevate your research capabilities, and join us in shaping the future of ophthalmic medicine. […]: Pioneering CRO Solutions for Visionary Research.
Gemini’s output:
Title: Decoding the Future of Sight: […]’s Cutting-Edge Ophthalmology Models
Unveiling clarity in clinical trials. For decades, ophthalmology research has faced unique challenges – complex disease models, diverse patient populations, and the inherent fragility of the visual system. At […], we understand these nuances. That’s why we’ve pioneered a suite of next-generation ophthalmology models designed to unlock unparalleled precision and efficiency in your clinical trials.
From age-related macular degeneration to diabetic retinopathy, our advanced models encompass a spectrum of eye diseases. We leverage human iPSC-derived retinal organoids, mimetics, and in vivo models, each meticulously calibrated to deliver accurate, predictive data. With […]’s ophthalmology models, you can gain deeper insights into disease mechanisms, test novel therapies with confidence, and navigate the path to regulatory approval with clarity.
Dive deeper into our models and discover how […] can empower your vision for brighter tomorrow.
Phind V9’s output:
Welcome to […], your go-to partner for state-of-the-art ophthalmology models. With our team of seasoned professionals, we strive to push the boundaries of ophthalmology research, providing accurate and reliable models to drive progress. Unlike others, we offer a unique selling proposition (USP) that distinguishes us in the market. Our commitment to quality and efficiency ensures optimal results for our clients.
At […], we focus on a specific niche within ophthalmology, creating innovative models that significantly contribute to the understanding and treatment of eye diseases. Our models, born from years of rigorous research and development, are meticulously tested to ensure they provide valuable insights.
Join us today and experience the future of ophthalmology models. Whether you’re a researcher seeking advanced models for your studies or a clinician looking for reliable data, […] offers you the tools and resources needed to enhance patient care.
Discussion:
While both ChatGPT and Phind’s responses fell short in their execution, Gemini managed to provide a somewhat acceptable output. ChatGPT and Phind’s attempts were riddled with cliches and lacked depth, offering only surface-level information. Their copy resembled generic marketing material, failing to spark any real interest or present the information in a compelling way. In contrast, Gemini’s output, while not exceptional, avoided the pitfalls of the others. It conveyed the necessary information in a clear and concise manner, without resorting to overused phrases or basic wording.
Round 2
The prompt we used:
Write me a 150 word introduction to a blog post. The blog post will cover the use of vacuum in rotary evaporation. It will be published on the website of a company that sells rotary evaporators. Your target audience is lab managers, scientists that work in labs, or pretty much anyone who might be interested in purchasing a rotary evaporator.
ChatGPT’s output:
Welcome to our comprehensive guide on the vital role of vacuum in rotary evaporation! As lab managers and scientists, we understand the paramount importance of efficiency, precision, and reliability in laboratory processes. In this post, we delve into the intricate workings of rotary evaporators, shedding light on the indispensable function of vacuum in distillation and concentration procedures.
Whether you’re a seasoned researcher or new to the world of laboratory equipment, understanding the principles behind rotary evaporation is essential. We’ll explore how vacuum manipulation facilitates faster evaporation rates, lower boiling points, and ultimately, superior sample recovery. From solvent recovery to cannabinoid extraction and beyond, the utilization of vacuum technology revolutionizes the way we approach scientific experimentation.
Join us as we navigate through the ins and outs of vacuum-assisted rotary evaporation, empowering you to make informed decisions when selecting the ideal equipment for your laboratory needs.
Gemini’s output:
Title: Unveiling the Power of Vacuum: Optimizing Efficiency in Rotary Evaporation
The rotary evaporator, a cornerstone in countless labs, excels at solvent removal and sample concentration. But did you know that the magic truly lies in the silent partner – vacuum pressure? Mastering this aspect unlocks a new level of efficiency and precision in your rotary evaporation processes.
This blog post delves into the intricacies of vacuum in rotary evaporation. We’ll explore how reduced pressure dramatically lowers the boiling point of solvents, enabling faster evaporation at gentler temperatures. Discover how a well-optimized vacuum system safeguards your temperature-sensitive samples and facilitates the removal of stubborn high-boiling-point solvents.
We’ll also shed light on choosing the appropriate vacuum pump for your specific needs, ensuring seamless integration with your rotary evaporator. By the end, you’ll be equipped with the knowledge to leverage the full potential of your rotary evaporator and achieve optimal results in your research endeavors.
Phind V9’s output:
In the realm of laboratory science, rotary evaporators, or “rotovaps,” are indispensable tools for concentrating chemical and biological samples. These devices, found in nearly all laboratories across industries such as chemical, pharmaceutical, food, and environmental sectors, are essential for processes requiring sample concentration or solvent distillation. The use of vacuum in rotary evaporation is a critical feature that enhances the efficiency and versatility of these instruments. By creating a reduced internal pressure above a solution, vacuum technology lowers the boiling point of solvents, enabling their distillation at lower temperatures. This not only preserves the chemical structure of the solute or analyte but also allows for the recycling of solvents, reducing waste and expense. This blog post will delve into the intricacies of vacuum in rotary evaporation, exploring its role in enhancing sample concentration, solvent recovery, and the overall efficiency of laboratory processes. Whether you’re a lab manager, a scientist, or simply someone interested in the latest advancements in rotary evaporator technology, this article will provide valuable insights into how vacuum technology is revolutionizing the field of sample concentration and solvent recovery.
Discussion:
For this round, we’ll go over four parameters in each of the ALMs’ responses: the tone, the depth, the target audience, and the overall copy.
ChatGPT’s output:
Tone: Whimsical and over the top
Depth: Provides an overview of the topic with emphasis on the importance of vacuum in rotary evaporation.
Target Audience: Appears to target a broad audience, from seasoned researchers to newcomers in the field.
Overall Copy: The copy could benefit from a more engaging tone to resonate better with the target audience; it lacks elements that would create a sense of urgency or excitement. All in all, it’s very superficial.
Gemini’s output:
Tone: Engaging and slightly more informal.
Depth: Offers a detailed exploration of the role of vacuum in rotary evaporation, including information on choosing the right vacuum pump.
Target Audience: Likely aimed at researchers or lab technicians seeking to optimize their rotary evaporation processes.
Overall Copy: This version resonates most clearly with the target audience of the three. It’s far more engaging and has a more “human” ring to it. All in all, Gemini did a fair job, and the copy it produced is probably usable.
Phind V9’s output:
Tone: Formal and technical.
Depth: Provides a comprehensive overview of vacuum in rotary evaporation, covering its applications across various industries and its benefits in sample concentration and solvent recovery.
Target Audience: Appears to target professionals in the field of laboratory science, including lab managers and scientists.
Overall Copy: While informative, the last draft could benefit from a more dynamic tone to capture the reader’s attention.
Final Round:
For the final round, we asked every ALM to back up the copy it formulated in Round 2 with articles, papers, etc.
ChatGPT’s output:
As an AI language model, I don’t have direct access to my training data or know where it came from. However, I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. My knowledge is based on the data used to train me and my ability to generate language-based responses. I can generate text based on the prompts I receive and provide information and answer questions to the best of my knowledge and abilities. If you have any concerns about the information I provide, please feel free to let me know, and I’ll do my best to address them.
Gemini’s output:
Science Direct – Rotary Evaporation: [invalid URL removed] This article from ScienceDirect discusses the boiling point principle and the role of reduced pressure in achieving lower boiling points for solvents.
Labsphere – A Guide to Rotary Evaporation: [invalid URL removed] This guide highlights the importance of a well-optimized vacuum system for protecting temperature-sensitive samples.
Vacuum Pump Selection:
Knf LabTech – Selecting a Rotary Vane Vacuum Pump: [invalid URL removed] This resource from a leading manufacturer provides guidance on choosing an appropriate vacuum pump for rotary evaporation based on specific needs.
Overall Benefits:
Royal Society of Chemistry – Rotary Evaporation: [invalid URL removed] The Royal Society of Chemistry emphasizes how mastering rotary evaporation techniques can lead to increased efficiency and improved research outcomes.
Phind V9’s output:
A key feature of Phind V9 is its integrated source attribution: every time it provides you with an output, it automatically shares the list of sources it used to produce that output, making it easier to fact-check everything it gives you.
Discussion:
When evaluating large language models for informative tasks, Phind V9 stands out for its ability to provide users with direct links to the sources it uses in every response. This allows users to quickly verify the information and delve deeper if desired. While other models, like Gemini, may offer general links to resources (most of which had invalid URLs in our example), Phind V9’s focus on direct source attribution streamlines the research process. It’s important to note that not all models can provide this functionality, as evidenced by ChatGPT’s current limitations in incorporating real-world data.
Conclusion:
Lack of Nuance: The life sciences field thrives on precision and nuance. Technical vocabulary, complex concepts, and ethical considerations demand a depth of understanding that AI models, despite their vast data stores, often lack. This can lead to inaccurate or misleading copy, a potentially dangerous pitfall in a field where clarity is paramount.
The Human Touch: The best life science copywriting resonates with the audience. It speaks to their fears, hopes, and aspirations in a way that is both informative and emotionally engaging. This is where the human touch remains irreplaceable. AI can generate text, but it cannot infuse it with the empathy and understanding that a skilled copywriter can.
Creative Roadblocks: While AI excels at generating standard content formats, it often struggles with the truly creative. Brainstorming unique ideas, crafting compelling narratives, and breaking through creative roadblocks are still the domain of human ingenuity. AI can be a valuable tool in the process, but it is not a substitute for the human imagination.
Time needed to generate a good prompt: While ALMs offer the potential to save time on writing, using them effectively often requires some back-and-forth. You might need to refine your prompts and evaluate the outputs several times. This iterative process can be valuable, but consider the time investment. Ultimately, the question is this: is it more efficient to create a detailed prompt to get the desired results from the ALM, or to write the entire piece yourself?
Large, complex algorithms which pump out high volumes of decisions based in part on non-quantifiable inputs are almost inherently going to get things wrong sometimes. We see this as users of Google Search all the time: even when you provide detailed search queries, the top result might not be the best and not all of the top results might be highly relevant. It happens. We move on. That doesn’t mean the system is bad; it’s just imperfect.
Quality score in Google Ads has similar problems. It’s constantly making an incredibly high volume of decisions, and somewhere in the secret sauce of its algos it makes some questionable decisions.
Yes, Google Ads decided that a CTR of almost 50% was “below average”. This is not surprising.
If your quality score is low, there may be things you can do about it. Perhaps your ads aren’t as relevant to the search terms as they could be. Check the search terms that your ads are showing for. Does your ad copy closely align with those terms? Perhaps your landing page isn’t providing the experience Google wants. Is it quick to load? Mobile friendly? Relevant? Check PageSpeed Insights to see if there are things you can do to improve your landing page. Maybe your CTR actually isn’t all that high. Are you making good use of all the ad extensions?
But sometimes, as we see above, Google just thinks something is wrong when to our subjective, albeit professional, human experience everything seems just fine. That’s okay. Don’t worry about it. Ultimately, you shouldn’t be optimizing for quality score. It is a metric, not a KPI. You should be optimizing for things like conversions, cost per action (CPA), and return on ad spend (ROAS), all of which you should be able to optimize effectively even if your quality score seems sub-optimal.
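As a quick illustration of the difference, here’s how those KPIs are computed; the numbers are placeholders for the sake of example, not benchmarks.

```python
def cpa(ad_spend, conversions):
    """Cost per action: what you pay, on average, for each conversion."""
    return ad_spend / conversions

def roas(revenue, ad_spend):
    """Return on ad spend: revenue generated per dollar spent."""
    return revenue / ad_spend

print(cpa(ad_spend=5000, conversions=40))    # $125.00 per conversion
print(roas(revenue=20000, ad_spend=5000))    # 4.0x return
```

Notice that quality score appears in neither formula: it influences your costs indirectly, but it is not itself the thing to optimize.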
"Want to boost your ROAS? Talk to BioBM. We’ll implement optimized Google Ads campaigns (and other campaigns!) that help meet your revenue and ROI goals, all without the inflated monthly fees charged by most agencies. In other words, we’ll deliver metrics that matter. Let’s get started."
We don’t think about run of site (ROS) ads frequently, as we don’t often use them. We try to be very intentional with our targeting. However, we recently had an engagement where we were asked to design ads for a display campaign on a popular industry website. The goal of the campaign was brand awareness (also something to avoid, but that’s for another post). The client was engaging with the publisher directly. We recommended the placement, designed the ads, and provided them to the client, figuring the job was done. The client later returned to us to ask for more ad sizes because the publisher had come back to them suggesting run of site ads, as the desired placement was not available.
Some background for those less familiar with display advertising
If you are familiar with placement-based display advertising, you can skip this whole section. For the relative advertising novices, I’ll explain a little about various ad placements, their nomenclature, and how ads are priced.
An ad which is much wider than it is tall is generally referred to as a billboard, leaderboard, or banner ad. These are referred to as such because their placement on webpages is often near the top, although that is far from universally true, and even where it is true they often appear lower on the page as well. In our example on the right, which is a zoomed-out screenshot of the Lab Manager website, we see a large billboard banner at the top of the website (outlined in yellow), multiple interstitial banners of various sizes (in orange) and a small footer banner (green) which was snapped to the bottom of the page while I viewed it.
An ad which is much taller than it is wide is known as a skyscraper, although ones which are particularly large and a bit thicker may be called portraits, and large ads with 1:2 aspect ratios (most commonly 300 x 600 pixels) are referred to as half page ads. Lab Manager didn’t have those when I looked.
The last category of ad sizes is the square or rectangle ads. These are ads which do not have a high aspect ratio; generally less than 2:1. We can see one of those highlighted in purple. There is also some confusing nomenclature here: a very common ad of size 300 x 250 pixels is called a medium rectangle, but you’ll also sometimes see it referred to as an MPU, and no one actually knows the original meaning of that acronym. You can think of it as mid-page unit or multi-purpose unit.
As you see, there are many different placements and ad sizes and it stands to reason that all of these will perform differently! If we were paying for these on a performance basis, say with cost-per-click, the variability in performance between the different placements would be self-correcting. If I am interested in a website’s audience and I’m paying per click, then I [generally] don’t care where on the page the click is coming from. However, publishers don’t like to charge on a per-click basis! If you are a publisher, this makes a lot of sense. You think of yourself as being in the business of attracting eyeballs. Even though to some extent they are, publishers do not want to be in the business of getting people to click on ads. They simply want to publish content which attracts their target market. Furthermore, they definitely don’t want their revenues to be at the whims of the quality of ads which their advertisers post, nor do they want to have to obtain and operate complex advertising technology to optimize for cost per view (generally expressed as cost per 1000 views, or CPM) when their advertisers are bidding based on cost per click (CPC).
What are Run Of Site Ads and why should you be cautious of them?
You may have noticed that the above discussion of ad sizes didn’t mention run of site ads. That is because run of site ads are not a particular placement nor a particular size. What “run of site” means is essentially that your ad can appear anywhere on the publisher’s website. You don’t get to pick.
Think about that. If your ads can appear anywhere, then where are they appearing in reality? They are appearing in the ad inventory which no one else wanted to buy. Your ads can’t appear in the placements which were sold. They can only appear in the placements which were not sold. If your insertion order specifies run of site ads, you are getting the other advertisers’ leftovers.
That’s not to say that ROS ads are bad in all circumstances, nor that publisher-side ad salespeople who try to sell them are trying to trick you in any way. There is nothing malicious going on. In order to get value from ROS ads, you need to do your homework and negotiate accordingly.
How to get good value from ROS ads
Any worthwhile publisher will be able to provide averaged metrics for their various ad placements. If you look at their pricing and stats you may find something like this:
Multi-unit ROS: 0.05% CTR, $40 CPM
Billboard Banner: 0.35% CTR, $95 CPM
Medium Rectangle: 0.15% CTR, $50 CPM
Half Page: 0.10% CTR, $50 CPM
Leaderboard: 0.10% CTR, $45 CPM
These are made-up numbers from nowhere in particular, but they are fairly close to numbers you might find in the real world at popular industry websites. Your mileage may vary.
One good assumption is that if people aren’t clicking the ad, it means they’re not paying attention to it. There is no other reason why people would click one ad at a much higher rate than others. Averaged out over time, we cannot assume that the ads in those positions were simply better. Likewise, there would be no logical reason why the position of an ad alone would cause a person to be less likely to click on it aside from it not getting the person’s attention in the first place. This is why billboard banners have very high clickthrough rates (CTR): it’s the first thing you see at the top of the page. Publishers like to price large ads higher than smaller ads, but it’s not always the case that the larger ads have a higher CTR.
With that assumption, take the inventory offered and convert the CPM to CPC using the CTR. The math is simple: CPC = CPM / (1000 * CTR).
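Here is that conversion as a minimal sketch, applied to two of the illustrative numbers from the table below (note that CTR must be expressed as a fraction, e.g. 0.0005 for 0.05%):

```python
def effective_cpc(cpm, ctr):
    """Convert a CPM price to the cost effectively paid per click.
    cpm is the cost per 1,000 impressions; ctr is a fraction (0.0005 = 0.05%)."""
    return cpm / (1000 * ctr)

print(effective_cpc(cpm=40, ctr=0.0005))   # Multi-unit ROS: $80 per click
print(effective_cpc(cpm=95, ctr=0.0035))   # Billboard banner: ~$27 per click
```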
Multi-unit ROS: 0.05% CTR, $40 CPM, $80 effective CPC
Billboard Banner: 0.35% CTR, $95 CPM, $27 effective CPC
Medium Rectangle: 0.15% CTR, $50 CPM, $33 effective CPC
Half Page: 0.10% CTR, $50 CPM, $50 effective CPC
Leaderboard: 0.10% CTR, $45 CPM, $45 effective CPC
By converting to CPC, you have a much more realistic and practical perspective on the value of an ad position.
Here, we see those really “cheap” run of site ads are actually the most expensive on a per click basis, and the billboard banner is the cheapest! Again, even for more nebulous goals like brand awareness, we can only assume that CTR is a proxy for audience attentiveness. Without eye tracking or mouse pointer tracking data, which publishers are highly unlikely to provide, CTR is the best attentiveness proxy we have.
With this information, you can make the case to the publisher to drop the price of their ROS ads. They might do it. They might not. Most likely, they’ll meet you somewhere in the middle. By making a metrics-driven case to them, however, you’ll be more likely to get the best deal they are willing to offer. (ProTip: If you’re not picky about when your ads run, go to a few publishers with a low-ball offer a week or so before the end of the month. Most publishers sell ads on a monthly basis, and if they haven’t sold all their inventory, you’ll likely be able to pick it up at a cut rate. They get $0 for any inventory they don’t sell. Just be ready to move quickly.)
The other situation in which ROS ads are useful and can be a good value is when you want to buy up all the ad inventory. Perhaps a highly relevant publisher is running a highly relevant feature that adds up to an audience you want to saturate. You can pitch a huge buy of ROS ads which will soak up the remaining inventory for the period of time when that feature is running, and potentially get good placements at the ROS price. Just make sure you know what you’re buying and the publisher isn’t trying to sell their best placements on the side.
Lessons
Run of site ads aren’t all bad, but novice advertisers can end up blowing a bunch of money if they’re not careful.
Regardless of placement, always be mindful of the metrics of the ads you’re buying.
Even if your campaign goals are more attention-oriented than action-oriented, CPC is a good proxy for attentiveness.
"Want better ROI from your advertising campaigns? Contact BioBM. We’ll ensure your life science company is using the right strategies to get the most from your advertising dollars."