
What is Generative Engine Optimization and can life science marketers make use of it?

Everyone knows what search engine optimization (SEO) is, and many companies go to great lengths to ensure they show up near the top of organic search results and benefit from the resulting traffic, which comes at no unit cost. Traditional organic search results are slowly being replaced, however, with much of the focus shifting to what Google calls the search generative experience (SGE; synonymous with AI Overview, which is how it is labeled on the search results page). It is widely accepted that as SGE becomes more prevalent, traffic to websites from legacy organic search results will decrease. This is due to two factors:

  • Fewer people will click on organic search links – or any links – when SGE is present.
  • The webpage links referenced in an SGE answer have lower clickthrough than standard organic search links.

Compounding both factors, legacy organic search results are far less prominent on search engine results pages (SERPs) when SGE is present.

In other words, some searchers will see the answer provided by the AI overview, accept it as accurate and sufficient, and take no further action. Searchers who would previously have clicked through to something may now not click on anything, so the bounce rate of SERPs likely increases markedly when SGE is present. SGE also contains its own reference links, which will inevitably cannibalize some legacy organic search traffic. Data from FirstPageSage shows that the effect is not dramatic (yet), but the first link in the AI overview is already garnering 9.4% of clicks. That compares to 39.8% for a top organic result or 42.9% for a rich snippet result when SGE is not present – and those clicks have to come from somewhere. The same FirstPageSage data shows SGE now appearing on 31% of SERPs.

In this post, we’ll look at what life science marketers can do, and should be doing, to adapt to the new search paradigm: Generative Engine Optimization (GEO).

How Generative Engine Optimization and Search Engine Optimization Overlap

Luckily for search marketers, GEO and SEO have a lot of overlap. If you are doing well at optimizing for search, you are probably doing a fair job at optimizing for generative engines. A number of key SEO principles apply to GEO:

  • Perform keyword research to ensure you are addressing popular user queries and develop content targeting those keywords.
  • The content you create should be helpful, reliable content that demonstrates experience, expertise, authoritativeness, and trustworthiness (what Google calls E-E-A-T).
  • Ensure you are signaling the relevance of your content through optimization of on-site and on-page factors (copy, metadata, schema, etc.) for targeted keywords.
  • Further signal the relevance of your website and content through off-site link building.
  • Ensure all your content is getting indexed.

Increasing the quantity of content, using clear language, and using technical language when appropriate also improve performance in both generative and organic search results. Other practices to improve the authority of a page or domain such as backlinking almost certainly play a role in GEO as well, as search AIs pick up on these signals (if not directly, then through their own understanding of organic search ranks).

There is further overlap if your goal in creating content is to get it seen by the maximum number of people instead of solely driving traffic to your website. In that case, disseminate your content as much as possible. While AI Overviews are not citing Reddit and other discussion forums as much as they once did, the more places your content lives, the more of a chance you’ll have that the AI will cite one of them, especially if your website itself is not well-optimized.

How GEO and SEO Differ in Practice

Optimizing for GEO is akin to specifically optimizing for rich snippets: there is additional emphasis on the content itself vs. ancillary factors. You need to pay more attention to how you provide information.

The term generative engine optimization was coined in a seminal preprint by Pranjal Aggarwal et al., uploaded to arXiv in late 2023, which investigated a number of factors the authors believe might help optimize content for inclusion in SGE. Note that this paper has yet to pass peer review and has been subject to a lot of scrutiny from SEO professionals – most thoroughly by Tylor Hermanson of Sandbox SEO, who gave a number of compelling reasons to believe the data may be overstated – but having read the paper and a number of critiques, I still think it contains meaningful and actionable lessons. Two figures in the paper summarize the most interesting and useful information:

Table 1 shows how different tactics affected results. The authors used a metric called position-adjusted word count to measure the performance of websites in SGE before and after various GEO methods. I prefer this metric because it is an objective determination, as opposed to their subjective impression metric, which basically involves feeding results into GPT-3.5 and seeing what it thinks. The results show that specific types of content addition – adding quotations, statistics, or citations – have a notable impact on the position-adjusted word count for those websites. I point those out not only because they have the greatest impact (along with fluency optimization), but because they are not things which would necessarily be considered important if the only consideration for content creation was SEO. All the other methods they tested and found useful – speaking clearly, fluently, technically, and authoritatively – are things which good SEO copy already needs to do. The inclusion of quotations, statistics, and citations is simply additional content.
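To build intuition for the metric: position-adjusted word count essentially rewards a source for being cited early and at length in the generated answer. Here is a toy sketch in Python; the 1/position weighting is an assumption for illustration, not necessarily the paper's exact normalization:

```python
def position_adjusted_word_count(answer_sentences, source_id):
    """Toy version of the GEO paper's metric: words attributed to a
    source, weighted so earlier sentences in the answer count for more.
    The 1/position decay is an illustrative assumption."""
    score = 0.0
    for position, (cited_source, sentence) in enumerate(answer_sentences, start=1):
        if cited_source == source_id:
            score += len(sentence.split()) / position
    return score

# Hypothetical generated answer: (cited source, sentence) pairs.
answer = [
    ("site_a", "Quotations and statistics increase citation likelihood."),
    ("site_b", "Clear technical language also helps."),
    ("site_a", "Fluency optimization had a large effect."),
]
print(position_adjusted_word_count(answer, "site_a"))  # → 8.0 (6/1 + 6/3)
```

Under this weighting, moving a site's citation from the third sentence to the first triples its contribution – which is why the methods that get content quoted prominently matter so much.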

The other interesting lesson from this paper is that the most impactful GEO methods differ based on the topic of the content.

While I would like to see this data presented the other way around – what methods are the highest performing for each category – it still makes the point. It also suggests that scientific content may receive disproportionate benefit from fluency optimization and authoritativeness. Again, those are already things which you should be factoring into your copy.

Practical Steps Life Science Marketers Should Take for GEO

If you are looking to optimize for generative engines, first ensure you are doing everything required for good SEO, as outlined above in the section on how GEO and SEO overlap. That is 80% of the job. To reiterate:

  • Perform thorough keyword research to address popular and relevant queries
  • Write in a way which demonstrates experience, expertise, authoritativeness, and trustworthiness (E-E-A-T)
  • Optimize on-site and on-page factors (copy, metadata, schema, etc.) for targeted keywords to demonstrate relevance
  • Further demonstrate relevance through off-site link building
  • Stay on top of Google Search Console and ensure your content is getting indexed
  • Write more / longer content
  • Write clearly and use appropriate technical language considering the subject matter

To specifically optimize for generative search beyond normal SEO, make a point of citing your sources and including statistics and / or quotations when possible. That is the lowest-hanging fruit, and most life science marketers will be fine stopping there. If you really want to deep dive into generative engine optimization, however, you can use a tool such as Market Brew’s AI Overviews Visualizer to investigate how search engines’ semantic analysis algorithms perform cluster analysis on your website content and see how content is grouped and related.

Since AI overviews decrease overall clickthrough rates, another consideration for some marketers may be getting their content into AI overviews regardless of whether it is hosted on their own website. In that case, disseminate your content widely across high-reputation sources, particularly Reddit. While Reddit is not cited in SGE as much as it used to be, having your content in multiple places still increases the probability it will be used.

Product Companies: Don’t Forget Merchant Center Feeds

While our anecdotal data shows that shopping results aren’t yet being included much in the life sciences, they are occasionally included in other industries and it would not be surprising to see them included more frequently in the life sciences in the future. When shown, these shopping results are very prominent, so ensure your Merchant Center feeds are functioning, include as much of your product portfolio as possible, and are well optimized. (Product feed optimization is a topic for another day.)

Summary

If you want to improve the likelihood that your content will appear in AI overviews and that those overviews will contain links to your website, start with SEO best practices. That will get you far in both legacy organic search, which still receives most clickthroughs, and in SGE. From there, ensure the content you are optimizing cites sources and includes statistics and quotations. If you sell products, ensure you are making optimal use of product data feeds.

GEO is not rocket science. By taking a few relatively simple steps, you’ll improve the likelihood of being included in AI overviews.

As this is a complex and novel topic, we’ve included an FAQ below.

Need to ensure you are found where scientists are looking? Contact BioBM. We stay on top of the trends, tools, and technologies necessary to ensure our clients can reliably be found by their target scientific audiences.

What are you waiting for? Work with BioBM and improve your demand generation.

FAQ

Is employing current SEO best practices sufficient for good ranking in generative search?

Helpful? Yes. Sufficient? It depends.

If your products and services are relatively niche, and the questions you seek to answer with your content are likewise niche, then current SEO best practices may be sufficient. If there is a lot of competition in your field, then you may need to incorporate GEO-specific best practices into your content creation.

You can think of this similarly to how you think about SEO. If you are optimizing for niche or longer-tail terms, you might not need to do as much as you will if competing for more major, high-traffic terms. The more competition, the more you’ll likely need to do to achieve the best results. If your terms are sufficiently competitive that you are not ranking well in organic search, you should definitely not presume that whatever you are doing for SEO will reliably land you in AI overviews.

If my website has high organic search ranks, will it perform well in SGE?

I’m not sure anyone has a clear answer to this, especially since the answer still seems to be changing rapidly. Many of the studies which exist on the topic are almost a year old (an eternity in AI time).

Taking things chronologically:

  • A January 2024 study by Authoritas using 1,000 terms found that “93.8% of generative links (in this dataset at least) came from sources outside the top-ranking organic domains. With only 4.5% of generative URLs directly matching a page 1 organic URL and only 1.6% showing a different URL from the same organic ranking domain.”
  • A January 2024 study from seoClarity looked at the top 3 websites suggested by SGE and compared them to just the top 3 organic results on the basis of domain only. In contrast with the Authoritas study, they found that only 31% of SGE results had no domains in common with the top 3 organic results, 44% of SGE results had 1 domain in common, 24% had two domains in common, and 1% had all three domains in common. This suggests much more overlap between generative and legacy organic results, but it should be noted that it was a much smaller study of only 66 keywords.
  • A January 2024 study from Varn Media, using a similar but less informative metric than Authoritas, found that 55% of SGE results had at least one link in common with a top-10 organic result on the same SERP. One result in the top 10 is a low bar. They did not publish the size of their study.
  • A February 2024 study from SE Ranking which looked at 100,000 keywords found that SGE included at least one link from the top 10 organic search results 85.5% of the time. I don’t like this very low-bar metric, but it’s how they measured.
  • A slightly more recent Authoritas study from March 2024 using 2,900 branded keywords showed that “62% of generative links […] came from sources outside the top 10 ranking organic domains. With only 20.1% of generative URLs directly matching a page 1 organic URL and only 17.9% showing a different URL from the same organic ranking domain.” Obviously branded terms are a very different beast, and it should be no surprise that SGE still references the brand / product in question when using branded terms.
  • SE Ranking repeated their 100k keyword study in June 2024 and found similar results to their February study: 84.72% of AI overviews included at least one link from the top 10 organic search results. Again, I don’t love this metric, but the fact that it was virtually unchanged five months after the original study is informative.
  • Another seoClarity study published in August 2024 found far more overlap between legacy organic results and SGE results. Their analysis of 36,000 keywords found that one or more of the top 10 organic web results appeared in the AI Overview 99.5% of the time and 77% of AI overviews referenced links exclusively from the top 10 organic web results. Furthermore, they found that “80% of the AI Overview results contain a link to one or more of the top 3 ranking results. And when looking at just the top 1 position, the AI Overview contained a link to it almost 50% of the time.”

The most recent seoClarity study, suggesting a much greater degree of overlap between organic web results and SGE links, tracks with my recent experiences. While I would ordinarily discount my personal experiences as anecdotal, in the face of wildly different and rapidly evolving data I find them to be a useful point of reference.

How much could my organic search traffic be impacted by SGE?

No one has any reliable metrics for that yet. Right now, I would trust FirstPageSage when they say the impact of SGE is not yet substantial, although I view their classification of it being “minimal” with some skepticism.

A lot of people like to point to a “study” posted on Search Engine Land which found a decline in organic search traffic between 18% and 64%, but it should be noted that this is not a study at all. It is simply a model based almost entirely on assumptions, and should therefore be taken with a huge grain of salt. (Also, 18–64% is not a narrow enough range to be particularly informative regardless.)

Is SEO still worth doing?

Absolutely, hands down, SEO is still worthwhile. Legacy organic search results still receive the majority of clickthroughs on SERPs. However, as AI continues to improve, you should expect diminishing returns, as more people get their answer from AI and take no further action. It is therefore important that whatever you need to get across is being fetched by AI and displayed in SGE – regardless of whether it leads to a click or not.

I heard there is a hack to get your products cited by generative AI more often. What’s up with that?

A paper by a pair of Harvard researchers originally posted to arXiv in April 2024, titled “Manipulating Large Language Models to Increase Product Visibility,” generated a lot of interest among both AI researchers and marketers looking for a cheat code to easily generate demand without any unit cost for that demand. As the title suggests, they found that LLMs can be manipulated into inserting specific products when providing product recommendations. It is unrealistic, however, that life science marketers will be able to apply this. It is a trial-and-error method involving high-volume testing of random, nonsensical text sequences added to your product’s metadata. This means it would be nearly impossible to test on anything other than an open-source LLM you are running yourself (and can therefore force to re-index your own content with extremely high frequency).

Another paper submitted to arXiv in June 2024 by a team of researchers from ETH Zurich titled “Adversarial Search Engine Optimization for Large Language Models” found that LLMs are vulnerable to preference manipulation through:

  • Prompt injection (literally telling the LLM what to do within the content)
  • Discreditation (i.e. badmouthing the competition)
  • Plugin optimization (similar to the above, but guiding the LLMs to connect to a desired API from which it will then obtain information)

While preference manipulation is simpler and feasible to implement, the problem with any overtly black-hat optimization technique remains: by the time the method is found and published, LLM developers are well on their way to fixing it, making it a game of whack-a-mole which could end with your website on a blacklist. Remember when Google took action against unnatural link building and had marketers disavow links to their sites? That was not fun for many black-hat search marketers. BioBM never recommends black-hat tactics, due to their impermanence, their likelihood of backfiring, and ethical concerns. There are plenty of good things you can focus on to enhance your search optimization (and generative engine optimization) while providing a better experience for all internet users.

Do Scientists Use AI / LLMs for Product Discovery?

There has been a lot of talk about AI optimization in the marketing world, much of it spurred by a preprint article published to arXiv in September which demonstrated that LLMs could be manipulated to increase product visibility. There is even a term for optimizing for generative engines: Generative Engine Optimization, or GEO. Of course, we are immediately interested in whether any of this is meaningful to marketers in the life sciences.

Our friends at Laboratory Supply Network recently beat us to the punch and asked Reddit’s Labrats community if they use LLMs to help them find scientific products. Good question! Apparently it is also one with a clear answer.

This is a relatively small poll, but the results are so skewed that they are likely telling. In this poll, 80% of scientists responded that they never use AI for product discovery: literally zero percent of the time. Another 14% barely ever use it. Only two respondents said they use it roughly 10% of the time or more, with one saying they use it more than half the time.

Some of the comments indicate that scientists simply don’t see any relative value in AI for scientific product discovery, or see much better value from other means of product discovery.


Another indicated that AI simply might not be helpful specifically within the scientific context.


Here is the full conversation in r/labrats:

“Do you use LLMs / AI to get recommendations on lab products?” – posted by u/LabSupNet in r/labrats

Maybe there will be a day where scientists adopt AI for product discovery in meaningful numbers, but it seems we aren’t there yet.

"Want scientists to discover your products and services? Contact BioBM. Our efficient and forward-looking demand generation strategies give life science companies the edge to get ahead and stay ahead. The earlier you engage with us, the more we can help. Work with BioBM."

Don’t Stress About “Nofollow” Backlinks

TL;DR Summary

  • Sites can use the HTML tag rel=”nofollow” to instruct search engines not to credit a link with any importance for the purposes of SEO
  • These instructions don’t carry authority: they are merely suggestions
  • Search engines, including Google, choose whether to listen to the nofollow suggestion or not
  • They generally do not listen to the suggestion
  • If you can generate contextually relevant backlinks from sites which use nofollow tags, go for it! You’ll likely get value from them regardless. Just don’t be spammy.

The History of HTML Link Relationship Tags

As the name implies, a link relationship tag provides context to search engines and other automated crawlers on the nature of the relationship between the source page and the destination page. Some very common ones which marketers may run into are rel=”sponsored”, which denotes links in sponsored content; rel=”ugc”, which denotes links in user-generated content; and rel=”nofollow”, which is supposed to tell search engines to completely ignore a link. There are over 100 link relations recognized by the Internet Assigned Numbers Authority, but most of them are somewhat arcane and not used by search engines in any way which would be meaningful to marketers.

Link relationship tags, AKA rel tags, came into being in 2005, largely in response to the need for a nofollow tag to combat the excessive blog, comment, and forum spam which was extremely prevalent through the 2000s. Nofollow was proposed by Google’s Matt Cutts and Blogger’s Jason Shellen. For a long time, because they didn’t have a better option, Google and other search engines treated nofollow tags as law. Not only would they give no SEO benefit to nofollow links, but for a long time Google wouldn’t even index them.

The Evolution of Nofollow

As blog and comment spam became less of an issue, and as search engines became much more powerful and able to understand context, nofollow and similar relationship tags became less important to the search engines. Google effectively said as much in an announcement on their Search Central Blog on September 10, 2019:

When nofollow was introduced, Google would not count any link marked this way as a signal to use within our search algorithms. This has now changed. All the link attributes—sponsored, ugc, and nofollow—are treated as hints about which links to consider or exclude within Search. We’ll use these hints—along with other signals—as a way to better understand how to appropriately analyze and use links within our systems.

Why not completely ignore such links, as had been the case with nofollow? Links contain valuable information that can help us improve search, such as how the words within links describe content they point at. Looking at all the links we encounter can also help us better understand unnatural linking patterns. By shifting to a hint model, we no longer lose this important information, while still allowing site owners to indicate that some links shouldn’t be given the weight of a first-party endorsement.

As stated in the post, as of March 1, 2020 Google changed the role of link relationship tags, making them suggestions (or, in Google’s words, “hints”) rather than rules.

Context Is Key

As search engines continue to become more intelligent and human-like in their understanding of context within content, life science SEO professionals need to pay greater attention to context. A nofollow backlink with just one or two sentences in a comment on a relevant Reddit post may be worth more than an entire guest post on a site with little other content relevant to your field. Focus on doing all the things which you should be doing anyway, regardless of whether the link is nofollow or not:

  • Post links only in relevant places
  • Contribute meaningfully to the conversation
  • Don’t be spammy
  • Keep your use of links to a minimum
  • Write naturally and use links naturally. Don’t force it.

Case: Laboratory Supply Network

Laboratory Supply Network started a backlinking campaign with BioBM in August 2023 which relied almost entirely on backlinks in comments from highly reputable websites (including Reddit, ResearchGate, and Quora), all of which use nofollow tags on their links. At the start of the campaign, their key rank statistics were:

  • Average rank: 26.08
  • Median rank: 14
  • % of terms in the top 10: 45.00% (63 out of 140)
  • % of terms in the top 3: 21.43% (30 out of 140)

Less than 8 months later, in March 2024, we had improved their search rank statistics massively:

  • Average rank: 17.54
  • Median rank: 7
  • % of terms in the top 10: 61.11% (88 out of 144)
  • % of terms in the top 3: 39.58% (57 out of 144)

Backlinking was not the only thing Laboratory Supply Network was doing to improve its SEO – it has a longstanding and relatively consistent content generation program, for instance – but the big difference before and after was the backlink campaign (which, again, relied almost entirely on nofollow backlinks!). In the previous year, LSN’s search statistics didn’t improve nearly as much.
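If you track your own keyword ranks, the summary statistics above are straightforward to reproduce from a list of per-keyword positions. A minimal sketch (the ranks below are hypothetical, not LSN's actual data):

```python
from statistics import mean, median

# Hypothetical list of organic ranks, one per tracked keyword.
ranks = [1, 2, 3, 5, 7, 9, 14, 22, 35, 60]

print(f"Average rank: {mean(ranks):.2f}")
print(f"Median rank: {median(ranks)}")
print(f"% of terms in the top 10: {sum(r <= 10 for r in ranks) / len(ranks):.2%}")
print(f"% of terms in the top 3: {sum(r <= 3 for r in ranks) / len(ranks):.2%}")
```

Note that average rank is easily skewed by a few poorly ranking terms, which is why it is worth reporting the median and the top-10 / top-3 shares alongside it, as in the case study above.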

Conclusions

Backlinking has long been a key component of a holistic SEO strategy, and it remains just as important as ever. Links are an important signal telling Google and other search engines what content is relevant and important with regards to any particular topic. While many highly reputable sites use rel=”nofollow” to try to discourage link spam, most link spam is more effectively dealt with in other ways, such as manual, automated, or community-driven moderation. Google knows these other moderation tools have become more effective, and therefore allows itself to treat the nofollow tag as more of a hint than a rule. If you are performing SEO for your life science company, don’t avoid sites just because they use nofollow. You can achieve good results in spite of it.

Looking to improve your search ranks and boost your organic lead generation? Work with BioBM. For over a decade, BioBM has been implementing proven SEO strategies that get our clients to the top of the search ranks and keep them there. Don’t wait. Start the conversation today.

Don’t Optimize for Quality Score in Google Ads

Sometimes you just have to let Google be Google.

Large, complex algorithms which pump out high volumes of decisions based in part on non-quantifiable inputs are almost inherently going to get things wrong sometimes. We see this as users of Google Search all the time: even when you provide detailed search queries, the top result might not be the best and not all of the top results might be highly relevant. It happens. We move on. That doesn’t mean the system is bad; it’s just imperfect.

Quality score in Google Ads has similar problems. It’s constantly making an incredibly high volume of decisions, and somewhere in the secret sauce of its algos it makes some questionable decisions.

Yes, Google Ads decided that a CTR of almost 50% was “below average”. This is not surprising.

If your quality score is low, there may be things you can do about it. Perhaps your ads aren’t as relevant to the search terms as they could be. Check the search terms your ads are showing for. Does your ad copy closely align with those terms? Perhaps your landing page isn’t providing the experience Google wants. Is it quick to load? Mobile friendly? Relevant? Check PageSpeed Insights to see if there are things you can do to improve your landing page. Maybe your CTR actually isn’t all that high. Are you making good use of all the ad extensions?

But sometimes, as we see above, Google just thinks something is wrong when to our subjective, albeit professional, human experience everything seems just fine. That’s okay. Don’t worry about it. Ultimately, you shouldn’t be optimizing for quality score. It is a metric, not a KPI. You should be optimizing for things like conversions, cost per action (CPA), and return on ad spend (ROAS), all of which you should be able to optimize effectively even if your quality score seems sub-optimal.

Want to boost your ROAS? Talk to BioBM. We’ll implement optimized Google Ads campaigns (and other campaigns!) that help meet your revenue and ROI goals, all without the inflated monthly fees charged by most agencies. In other words, we’ll deliver metrics that matter. Let’s get started.

Avoid CPM Run of Site Ads

Not all impressions are created equal.

We don’t think about run of site (ROS) ads frequently, as we don’t often use them. We try to be very intentional with our targeting. However, we recently had an engagement where we were asked to design ads for a display campaign on a popular industry website. The goal of the campaign was brand awareness (also something to avoid, but that’s for another post). The client was engaging with the publisher directly. We recommended the placement, designed the ads, and provided them to the client, figuring the job was done. The client later returned to us asking for more ad sizes, because the desired placement was not available and the publisher had suggested run of site ads instead.

Some background for those less familiar with display advertising

If you are familiar with placement-based display advertising, you can skip this whole section. For the relative advertising novices, I’ll explain a little about various ad placements, their nomenclature, and how ads are priced.

An ad which is much wider than it is tall is generally referred to as a billboard, leaderboard, or banner ad. These are referred to as such because their placement on webpages is often near the top, although that is far from universally true, and even where it is true they often appear lower on the page as well. In our example on the right, which is a zoomed-out screenshot of the Lab Manager website, we see a large billboard banner at the top of the website (outlined in yellow), multiple interstitial banners of various sizes (in orange) and a small footer banner (green) which was snapped to the bottom of the page while I viewed it.

An ad which is much taller than it is wide is known as a skyscraper, although ones which are particularly large and a bit thicker may be called portraits, and large ads with 1:2 aspect ratios (most commonly 300 x 600 pixels) are referred to as half page ads. Lab Manager didn’t have those when I looked.

The last category of ad sizes is the square or rectangle ads. These are ads which do not have a high aspect ratio; generally less than 2:1. We can see one of those highlighted in purple. There is also some confusing nomenclature here: a very common ad of size 300 x 250 pixels is called a medium rectangle but you’ll also sometimes see it referred to as an MPU, and no one actually knows the original meaning of that acronym. You can think of it as mid-page unit or multi-purpose unit.

As you see, there are many different placements and ad sizes, and it stands to reason that all of these will perform differently! If we were paying for these on a performance basis, say with cost-per-click, the variability in performance between the different placements would be self-correcting. If I am interested in a website’s audience and I’m paying per click, then I [generally] don’t care where on the page the click is coming from.

However, publishers don’t like to charge on a per-click basis! If you are a publisher, this makes a lot of sense. You think of yourself as being in the business of attracting eyeballs. Publishers do not want to be in the business of getting people to click on ads. They simply want to publish content which attracts their target market. Furthermore, they definitely don’t want their revenues to be at the whims of the quality of the ads their advertisers post, nor do they want to obtain and operate complex advertising technology to optimize for cost per view (generally expressed as cost per 1000 views, or CPM) when their advertisers are bidding based on cost per click (CPC).

What are Run Of Site Ads and why should you be cautious of them?

You may have noticed that the above discussion of ad sizes didn’t mention run of site ads. That is because run of site ads are not a particular placement nor a particular size. What “run of site” means is essentially that your ad can appear anywhere on the publisher’s website. You don’t get to pick.

Think about that. If your ads can appear anywhere, then where are they appearing in reality? They are appearing in the ad inventory which no one else wanted to buy. Your ads can’t appear in the placements which were sold. They can only appear in the placements which were not sold. If your insertion order specifies run of site ads, you are getting the other advertisers’ leftovers.

That’s not to say that ROS ads are bad in all circumstances, nor that publisher-side ad salespeople who try to sell them are trying to trick you in any way. There is nothing malicious going on. In order to get value from ROS ads, you need to do your homework and negotiate accordingly.

How to get good value from ROS ads

Any worthwhile publisher will be able to provide averaged metrics for their various ad placements. If you look at their pricing and stats you may find something like this:

Ad Format           CTR     CPM
Multi-unit ROS      0.05%   $40
Billboard Banner    0.35%   $95
Medium Rectangle    0.15%   $50
Half Page           0.10%   $50
Leaderboard         0.10%   $45
These are made-up numbers from nowhere in particular, but they are fairly close to numbers you might find in the real world at popular industry websites. Your mileage may vary.

A reasonable assumption is that if people aren't clicking on an ad, they're not paying attention to it. Averaged out over time and many campaigns, we can't attribute large differences in clickthrough rate to some placements simply hosting better ads, and there is no logical reason why the position of an ad alone would make a person less likely to click on it aside from it not getting the person's attention in the first place. This is why billboard banners have very high clickthrough rates (CTR): they're the first thing you see at the top of the page. Publishers like to price large ads higher than smaller ads, but it's not always the case that the larger ads have a higher CTR.

With that assumption, take the inventory offered and convert the CPM to CPC using the CTR. The math is simple: CPC = CPM / (1000 * CTR).

Ad Format           CTR     CPM    Effective CPC
Multi-unit ROS      0.05%   $40    $80
Billboard Banner    0.35%   $95    $27
Medium Rectangle    0.15%   $50    $33
Half Page           0.10%   $50    $50
Leaderboard         0.10%   $45    $45
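As a sketch of the math, the conversion above is easy to script. Here is a minimal Python example using the illustrative rate-card numbers from the table (made-up figures, not real publisher data):

```python
# Convert a publisher's CPM rate card into effective CPC, using each
# placement's average clickthrough rate. Formula: CPC = CPM / (1000 * CTR).
rate_card = {
    # placement: (CTR as a fraction, CPM in dollars)
    "Multi-unit ROS":   (0.0005, 40),
    "Billboard Banner": (0.0035, 95),
    "Medium Rectangle": (0.0015, 50),
    "Half Page":        (0.0010, 50),
    "Leaderboard":      (0.0010, 45),
}

def effective_cpc(cpm, ctr):
    """Cost per click implied by a CPM price at a given CTR."""
    return cpm / (1000 * ctr)

for placement, (ctr, cpm) in rate_card.items():
    print(f"{placement}: ${effective_cpc(cpm, ctr):.2f} per click")
```

Running this reproduces the table: the "cheap" $40-CPM ROS inventory comes out at $80 per click, while the "expensive" $95-CPM billboard comes out around $27.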
By converting to CPC, you have a much more realistic and practical perspective on the value of an ad position.

Here, we see those really “cheap” run of site ads are actually the most expensive on a per click basis, and the billboard banner is the cheapest! Again, even for more nebulous goals like brand awareness, we can only assume that CTR is a proxy for audience attentiveness. Without eye tracking or mouse pointer tracking data, which publishers are highly unlikely to provide, CTR is the best attentiveness proxy we have.

With this information, you can make the case to the publisher to drop the price of their ROS ads. They might do it. They might not. Most likely, they'll meet you somewhere in the middle. By making a metrics-driven case to them, however, you'll be more likely to get the best deal they are willing to offer. (ProTip: If you're not picky about when your ads run, go to a few publishers with a low-ball offer a week or so before the end of the month. Most publishers sell ads on a monthly basis, and if they haven't sold all their inventory, you'll likely be able to pick it up at a cut rate. They get $0 for any inventory they don't sell. Just be ready to move quickly.)

The other situation in which ROS ads are useful and can be a good value is when you want to buy up all the ad inventory. Perhaps a highly relevant publisher is running a highly relevant feature, and it all adds up to an audience you want to saturate. You can pitch a huge buy of ROS ads which will soak up the remaining inventory for the period when that feature is running, and potentially get good placements at the ROS price. Just make sure you know what you're buying and that the publisher isn't trying to sell their best placements on the side.

Lessons

  • Run of site ads aren’t all bad, but novice advertisers can end up blowing a bunch of money if they’re not careful.
  • Regardless of placement, always be mindful of the metrics of the ads you’re buying.
  • Even if your campaign goals are more attention-oriented than action-oriented, CTR is a good proxy for attentiveness, and effective CPC for the cost of that attention.
"Want better ROI from your advertising campaigns? Contact BioBM. We’ll ensure your life science company is using the right strategies to get the most from your advertising dollars."

Can AI Replace Life Science / Laboratory Stock Images?

We’re over half a year into the age of AI, and its abilities and limitations for both text and image generation are fairly well-known. However, the available AI platforms have had a number of improvements over the past months, and have become markedly better. We are slowly but surely getting to the point where generative image AIs know what hands should look like.

But do they know what science looks like? Are they a reasonable replacement for stock images? Those are the meaningful questions if they are going to be useful for the purposes of life science marketing. We set out to answer them.

A Few Notes Before I Start Comparing Things

Being able to create images which are reasonably accurate representations is the bare minimum for the utility of AI in replacing stock imagery. Once we move past that, the main questions are those of price, time, and uniqueness.

AI tools are inexpensive compared with stock imagery. A mid-tier stock imagery site such as iStock or ShutterStock will charge roughly $10 per image if paid with credits, or anywhere from $7 down to roughly a quarter per image if you purchase a monthly subscription. Of course, if you want something extremely high-quality, images from Getty Images or a specialized science stock photo provider like Science Photo Library or ScienceSource can easily cost many hundreds of dollars per image. In comparison, Midjourney’s pro plan, which is $60 / month, gives you 30 hours of compute time. Each prompt provides you with 4 images and generally takes around 30 seconds. You could, in theory, acquire 8 images per minute, meaning each costs 0.4 cents. (In practice, with the current generation of AI image generation tools, you are unlikely to get images which match your vision on the first try.) Dall-E’s pricing is even simpler: each prompt is one credit, also provides 4 images, and credits cost $0.13 each. Stable Diffusion is still free.
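As a back-of-envelope check on that 0.4-cent figure, here is the arithmetic in Python, using the plan figures quoted above ($60/month, 30 hours of compute, roughly 30 seconds and 4 images per prompt):

```python
# Theoretical per-image cost of Midjourney's $60/month pro plan,
# per the figures cited above.
monthly_cost_usd = 60.0
seconds_per_prompt = 30
images_per_prompt = 4
compute_hours = 30

images_per_minute = (60 / seconds_per_prompt) * images_per_prompt   # 8
max_images_per_month = compute_hours * 60 * images_per_minute       # 14,400
cost_per_image = monthly_cost_usd / max_images_per_month

print(f"{cost_per_image * 100:.1f} cents per image")  # prints "0.4 cents per image"
```

Of course, this is a theoretical ceiling; nobody mashes the generate button for 30 straight hours, and many generations are throwaways.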

Having used stock image sites extensively, and having spent some time playing around with the current AI offerings for purposes other than business, it’s not clear to me which is more convenient or takes less time. Sometimes you’ll get lucky and get a good AI image on the first try, but you could say the same about stock image sites. Where AI eliminates the need to go through pages and pages of stock images to find the right one, it replaces that with tweaking prompts and waiting for the images to generate. It should be noted that there is some learning curve to using AI as well: for instance, telling it to give you a “film still” or “photograph” if you want a representation of real life which isn’t meant to look illustrated and cartoonish. There are a million of these tricks, and each system has its own small library of commands which it helps to be familiar with so you can get an optimal output. Ultimately, AI probably does take a little bit more time, and it also requires more skill. Mindlessly browsing for stock images is still much easier than trying to get a good output from a generative AI (although playing with AI is usually more fun).

Where stock images simply can’t compete at all is uniqueness. When you generate an image with an AI, it is a unique image. Every image generated is one of one. You don’t get the “oh, I’ve seen this before” feeling that you get with stock images, which is especially prevalent for life science / laboratory topics given the relatively limited supply of scientific stock images. We will probably, at some point in the not too distant future, get past the point of being able to identify an AI image meant to look real by the naked eye. Stock images have been around for over a century and the uniqueness problem has only become worse. It is inherent to the medium. The ability to solve that problem is what excites me most about using generative AI imagery for life science marketing.

The Experiment! Ground Rules

If this is going to be an experiment, it needs structure. Here is how it is going to work.

The image generators & stock photo sites used will be:

  • Midjourney
  • Dall-E
  • Stable Diffusion
  • iStock
  • Getty Images
  • Science Photo Library

I was going to include ShutterStock but there’s a huge amount of overlap with iStock, I often find iStock to have slightly higher-quality images, and I don’t want to make more of a project out of this than it is already going to be.

I will be performing 10 searches / generations. To allow for a mix of ideas and concepts, some will be of people, some will be of things, I’ll toss in some microscopy-like images, and some will be of concepts which would normally be presented in an illustrated rather than photographed format. With the disclaimer that these concepts are taken solely from my own thoughts in hope of trying to achieve a good diversity of concepts, I will be looking for the following items:

  1. A female scientist performing cell culture at a biosafety cabinet.
  2. An Indian male scientist working with an LC-MS instrument.
  3. An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.
  4. A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.
  5. An NGS instrument on a bench in a genomics lab.
  6. A high-magnification fluorescent micrograph of neural tissues.
  7. A colored scanning electron micrograph of carcinoma cells.
  8. A ribbon diagram of a large protein showing quaternary structure.
  9. A 3D illustration of plasmacytes releasing antibodies.
  10. An illustration of DNA methylation.

So that nothing has an edge, none of these are things which I have recently searched for on stock image sites, nor which I have previously attempted to generate using AI tools. Note that these are solely the ideas I am looking for; they are not necessarily the exact queries used when generating AI images or searching the stock photo sites.

Looking for stock images and generating AI graphics are very different processes but they both share one critical dimension: time. I will therefore be limiting myself to 5 minutes on each platform for each image. That’s a reasonable amount of time to try to either find a stock image or get a decent output from an AI. It will also ensure this experiment doesn’t take me two days. Here we go…

Round 1: A female scientist performing cell culture at a biosafety cabinet.

One thing that AI image generators are really bad at in the context of the life sciences is being able to identify and reproduce specific things. I thought that this one wouldn’t be too hard because these models are in large part trained on stock images and there’s a ton of stock images of cell culture, many of which look fairly similar. I quickly realized that this was going to be an exercise in absurdity and hilarity when DALL-E gave me a rack of 50 ml Corning tubes made of Play-Doh. I would be doing you a grave disservice if I did not share this hilarity with you, so I’ll present not only the best images which I get from each round, but also the worst. And oh, there are so many.

I can’t withhold the claymation 50 ml Corning tubes from you. It would just be wrong of me.

I also realized that the only real way to compensate for this within the constraints of a 5-minute time limit is to mash the generate button as fast as I can. When your AI only has a vague idea of what a biosafety cabinet might look like and it’s trying to faithfully reproduce them graphically, you want it to be able to grasp at as many straws as possible. Midjourney gets an edge here because I can run a bunch of generations in parallel.

Now, without further ado, the ridiculous ones…

Round 1 AI Fails

Dall-E produced a large string of images which looked less like cell culture than women baking lemon bars.

Midjourney had some very interesting takes on what cell culture should look like. My favorite is the one that looks like something in a spaceship and involves only machines. The woman staring at her “pipette” in the exact same manner I am staring at this half-pipette half-lightsaber over her neatly arranged, unracked tubes is pretty good as well. Side note: in that one I specifically asked for her to be pipetting a red liquid in a biosafety cabinet. It made the gloves and tube caps red. There is no liquid. There is no biosafety cabinet.

For those who have never used it, Stable Diffusion is hilariously awful at anything meant to look realistic. If you’ve ever seen AI images of melted-looking people with 3 arms and 14 fingers, it was probably Stable Diffusion. The “best” it gave me were things that could potentially be biosafety cabinets, but when it was off, boy was it off…

Rule number one of laboratories: hold things with your mouth. (Yes we are obviously kidding, do not do that.)

That was fun! Onto the “successes.”

Round 1 AI vs. Stock

Midjourney did a wonderful job of creating realistic-looking scientists in labs that you would only see in a movie. Also keeping with the movie theme, Midjourney thinks that everyone looks like a model; no body positivity required. It really doesn’t want people to turn the lights on, either. Still, the best AI results, by a country mile, were from Midjourney.

The best Dall-E could do is give me something that you might confuse as cell culture at a biosafety cabinet if you didn’t look at it and were just looking past it as you turned your head.

Stable Diffusion’s best attempts are two things which could absolutely be biosafety cabinets in Salvador Dali world. Also, that scientist on the right may require medical attention.

Stock image sites, on the other hand, produce some images of cell culture in reasonably realistic looking settings, and it took me way less than 5 minutes to find each. Here are images from iStock, Getty Images, and Science Photo Library, in that order:

First round goes to the stock image sites, all of which produced a better result than anything I could coax from AI. AI 0 – 1 Stock.

Round 2: An Indian male scientist working with an LC-MS instrument.

I am not confident that AI is going to know what an LC-MS looks like. But let’s find out!

One notable thing that I found is that the less specific you become, the easier it gets for the AI. The below image was a response to me prompting Dall-E for a scientist working with an LC-MS, but it did manage to output a realistic looking person in an environment that could be a laboratory. It’s not perfect and you could pick it apart if you look closely, but it’s pretty close.

A generic prompt like “photograph of a scientist in a laboratory” might work great in Midjourney, or even Dall-E, but the point of this experiment would be tossed out the window if I set that low of a bar.

Round 2 AI Fails

Midjourney:

Dall-E:

Stable Diffusion is terrible. It’s difficult to tell the worst ones from the best ones. I was going to call one of these the “best” but I’m just going to put them all here because they’re all ridiculous.

Round 2 AI vs. Stock

Midjourney once again output the best results by far, and had some valiant efforts…

… but couldn’t match the real thing. Images below are from iStock, Getty Images, and Science Photo Library, respectively.

One thing you’ve likely noticed is that none of these are Indian men! While we found good images of scientists performing LC-MS, we couldn’t narrow it down to both race and gender. Sometimes you have to take what you can get! We were generally able to find images which show more diversity, however, and it’s worth noting that Science Photo Library had the most diverse selection (although many of the images I found there are editorial use only, which is very limiting from a marketing perspective).

Round 2 goes to the stock sites. AI 0 – 2 Stock.

Round 3: An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.

This should be easier all around.

Side note: I should’ve predicted this, but as the original query merely asked for science, my initial Midjourney query made it look like the lab was presenting something out of a sci-fi game. Looked cool, but not what we’re aiming for.

Round 3 AI Fails

Dall-E presented some interesting science on the genetic structure of dog kibble.

Dall-E seemed to regress with these queries, as if drawing more than one person correctly was just too much to ask. It produced a huge stream of almost Picasso-esque people presenting things that vaguely resembled what could, if sufficiently de-abstracted, be scientific figures. It’s as if it knows what it wants to show you but is drawing it with the hands of a 2-year-old.

Stable Diffusion is just bad at this. This was the best it could do.

Round 3 AI vs. Stock

Take the gloves off, this is going to be a battle! While Midjourney continued its penchant for lighting which is more dramatic than realistic, it produced a number of beautiful images with “data” that, while they are extravagant for a lab meeting, could possibly be illustrations of some kind of life science. A few had some noticeable flaws – even Midjourney does some weird stuff with hands sometimes – but they largely seem usable. After all, the intent here is as a replacement for stock images. Such images generally wouldn’t be used in a way which would draw an inordinate amount of attention to them. And if someone does notice a small flaw that gives it away as an AI image, is that somehow worse than it clearly being stock? I’m not certain.

Stock images really fell short here. The problem is that people taking stock photos don’t have data to show, so they either don’t show anyone presenting anything, or they show them presenting something which betrays the image as generic stock. Therefore, to make them look like scientists, they put them in lab coats. Scientists, however, generally don’t wear lab coats outside the lab. It’s poor lab hygiene. Put a group of scientists in a conference room and it’s unusual that they’ll all be wearing lab coats.

That’s exactly what iStock had. Getty Images had an image of a single scientist presenting, but you didn’t see the people he was presenting to. Science Photo Library, which has far less to choose from, also didn’t have people presenting visible data. The three comps are below:

Side Note / ProTip: You can find that image from Getty Images, as well as many images that Getty Images labels as “royalty free” on iStock (or other stock image sites) for way less money. Getty will absolutely fleece you if you let them. Do a reverse image search to find the cheapest option.

Considering the initial idea we wanted to convey, I have to give this round to the AI. The images are unique, and while they lack some realism, so do the stock images.

Round 3 goes to AI. AI 1 – 2 Stock.

Let’s see if Dall-E or Stable Diffusion can do better in the other categories.

Round 4: A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.

I’ve seen nice stock imagery of this before. Let’s see if AI can match it, and if I can readily find it again on the stock sites.

Round 4 AI Fails

Dall-E had a long string of images which looked like everything shown was made entirely of polystyrene and put in the autoclave at too high a temperature. You might have to click to expand to see the detail. It looks like everything partially melted, but then resolidified.

Stable Diffusion is more diffuse than stable. Three of these are the best that it did while the fourth is when it gave up and just started barfing visual static.

This is the first round where Midjourney, in my opinion, didn’t do the best job. Liquid handling systems have a fair amount of variability in how they can be presented, but pipette tips do not, and it didn’t seem to know what pipette tips should look like, nor how they would be arranged in a liquid handling system. These are the closest it got:

Very pretty! Not very accurate.

Round 4 AI vs. Stock

We have a new contestant for the AI team! Dall-E produced the most realistic looking image. Here you have it:

Not bad! Could it be an automated pipetting system? We can’t see it, but it’s possible. The spacing between the tips isn’t quite even and it looks like PCR strips rather than a plate, but hey, a microplate wasn’t part of the requirements here.

Let’s see what I can dig up for stock… Here’s iStock, Getty, and SPL, respectively:

I didn’t get the drips I was looking for – probably needed to dig more for that – but we did get some images which are obviously liquid handling systems in the process of dispensing liquids.

As valiant of an effort as Dall-E had, the images just aren’t clean enough to have the photorealism of real stock images. Round goes to the stock sites. AI 1 – 3 Stock.

Round 5: An NGS instrument on a bench in a genomics lab.

I have a feeling the higher-end stock sites are going to take this, as there aren’t a ton of NGS instruments so it might be overly specific for AI.

Round 5 AI Fails

Both Midjourney and Dall-E needed guidance that a next-generation sequencer wasn’t some modular device used for producing techno music.

With Dall-E, however, it proved to not be particularly trainable. I imagine its AI mind thinking: “Oh, you want a genome sequencer? How about if I write it for you in gibberish?” That was followed by it throwing its imaginary hands in the air and generating random imaginary objects for me.

Midjourney also had some pretty but far-out takes, such as this thing which looks much more like an alien version of a pre-industrial loom.

Round 5 AI vs. Stock

This gets a little tricky, because AI is never going to show you a specific genome sequencer, not to mention that if it did you could theoretically run into trademark issues. With that in mind, you have to give them a little bit of latitude. Genome sequencers come in enough shapes and sizes that there is no one-size-fits-all description of what one looks like. Similarly, there are few enough popular ones that unless you see a specific one, or its tell-tale branding, you might not know what it is. Can you really tell the function of one big gray plastic box from another just by looking at it? Given those constraints, I think Midjourney did a heck of a job:

There is no reason that a theoretical NGS instrument couldn’t look like any of these (although some are arguably a bit small). Not half bad! Let’s see what I can get from stock sites, which also will likely not want to show me logos.

iStock had a closeup photo of a MinION, which, while it technically fits the description of what we were looking for, doesn’t fit the intent. Aside from that, it had a mediocre rendering of something supposed to be a sequencer and a partial picture of something rather old which might be an old Sanger sequencer.

After not finding anything at all on Getty Images, down to the wire right at the 5:00 mark I found a picture of a NovaSeq 6000. Science Photo Library had an image of an ABI SOLiD 4 on a bench in a lab with the lights off.

Unfortunately, Getty has identified the person in the image, meaning that even though you couldn’t ID the individual just by looking at it, the image isn’t suitable for commercial use. I’m therefore disqualifying that one. Is the oddly lit (and extremely expensive) picture of the SOLiD 4, or the conceptually off-target picture of the MinION, better than what the AI came up with? I don’t think I can conclusively say either way, and one thing that I dislike doing as a marketer is injecting my own opinion where it shouldn’t be. The scientists should decide! For now, this will be a tie.

AI 1, Stock 3, Tie 1

Round 6: A high-magnification fluorescent micrograph of neural tissues.

My PhD is in neuroscience so I love this round. If Science Photo Library doesn’t win this round they should pack up and go home. Let’s see what we get!

Round 6 AI Fails

Dall-E got a rough, if not slightly cartoony, shape of neurons but never really coalesced into anything that looked like a genuine fluorescent micrograph (top left and top center in the image below). Stable Diffusion, on the other hand, was either completely off the deep end or just hoping that if it overexposed out-of-focus images enough that it could slide by (top right and bottom row).

Round 6 AI vs. Stock

Midjourney produced a plethora of stunning images. They are objectively beautiful and could absolutely be used in a situation where one only needed the concept of neurons rather than an actual, realistic-looking fluorescent micrograph.

They’re gorgeous, but they’re very obviously not faithful reproductions of what a fluorescent micrograph should look like.

iStock didn’t produce anything within the time limit. I found high-magnification images of neurons which were not fluorescent (probably colored TEM), fluorescent images of neuroblastomas (not quite right), and illustrations of neurons which were not as interesting as those above.

Getty Images did have some, but Science Photo Library had pages and pages of on-target results. SPL employees, you still have jobs.

A small selection from page 1 of 5.

AI 1, Stock 4, Tie 1

Round 7: A colored scanning electron micrograph of carcinoma cells.

This is another one where Science Photo Library should win handily, but there’s only one way to find out!

Round 7 AI Fails

None of the AI tools failed in such a spectacular way that it was funny. Dall-E produced results which suggested it almost understood the concept, although could never put it together. Here’s a representative selection from Dall-E:

… and from Stable Diffusion, which as expected was further off:

Round 7 AI vs. Stock

Midjourney actually got it, and if these aren’t usable, they’re awfully close. As with the last round, these would certainly be usable if you needed to communicate the concept of a colored SEM image of carcinoma cells more than you needed accurate imagery of them.

iStock didn’t have any actual SEM images of carcinomas which I could find within the time limit, and Midjourney seems to do just as good of a job as the best illustrations I found there:

Getty Images did have some real SEM images, but the ones of which I found were credited to Science Photo Library and their selection was absolutely dwarfed by SPL’s collection, which again had pages and pages of images of many different cancer cell types:

It just keeps going. There were 269 results.

Here’s where this gets difficult. On one hand, we have images from Midjourney which would take the place of an illustration and which cost me less than ten cents to create. On the other hand, we have actual SEM images from Science Photo Library that are absolutely incredible, not to mention real, but depending on how you want to use them, would cost somewhere in the $200 – $2000 range per photo.

To figure out who wins this round, I need to get back to the original premise: Can AI replace stock in life science marketing? These images are every bit as usable as the items from iStock. Are they as good as the images from SPL? No, absolutely not. But are marketers always going to want to spend hundreds of dollars for a single stock photo? No, absolutely not. There are times when it will be worth it, but many times it won’t be. That said, I think I have to call this round a tie.

AI 1, Stock 4, Tie 2

Round 8: A ribbon diagram of a large protein showing quaternary structure.

This is something that stock photo sites should have in droves, but we’ll find out. To be honest, for things like this I personally search for images with friendly licensing requirements on Wikimedia Commons, which in this case gives ample options. But that’s outside the scope of the experiment so on to round 8!

Round 8 AI Fails

I honestly don’t know why I’m still bothering with Stable Diffusion. The closest it got was something which might look like a ribbon diagram if you took a massive dose of hallucinogens, but it mostly output farts.

Dall-E was entirely convinced that all protein structures should have words on them (a universally disastrous yet hilarious decision from any AI image generator) and I could not convince it otherwise:

This has always baffled me, especially as it pertains to DALL-E, since it’s made by OpenAI, the creators of ChatGPT. You would think it would be able to at least output actual words, even if used nonsensically, but apparently we aren’t that far into the future yet.

Round 8 AI vs. Stock

While Midjourney did listen when I told it not to use words and provided the most predictably beautiful output, they are obviously not genuine protein ribbon diagrams. Protein ribbon diagrams are a thing with a very specific look, and this is not it.

I’m not going to bother digging through all the various stock sites because there isn’t a competitive entry from team AI. So here’s a RAF-1 dimer from iStock, and that’s enough for the win.

AI 1, Stock 5, Tie 2. At this point AI can no longer catch up to stock images, but we’re not just interested in what “team” is going to “win” so I’ll keep going.

Round 9: A 3D illustration of plasmacytes releasing antibodies.

I have high hopes for Midjourney on this. But first, another episode of “Stable Diffusion Showing Us Things”!

Round 9 AI Fails

Stable Diffusion is somehow getting worse…

DALL-E was closer, but also took some adventures into randomness.

Midjourney wasn’t initially giving me the results that I hoped for, so to test if it understood the concept of plasmacytes I provided it with only “plasmacytes” as a query. No, it doesn’t know what plasmacytes are.

Round 9 AI vs. Stock

I should just call this Midjourney vs. Stock. Regardless, Midjourney didn’t quite hit the mark. There are an inordinate number of ways to refer to plasmacytes (plasma cells, B lymphocytes, B cells, etc.), and it did eventually get the idea, but it never looked quite right, and it never got the antibodies right, either. It did grasp the concept of a cell releasing something, but those things look nothing like antibodies.

I found some options on iStock and Science Photo Library (shown below, respectively) almost immediately, and the SPL option is reasonably priced if you don’t need it in extremely high resolution, so my call for Midjourney has not panned out.

Stock sites get this round. AI 1, Stock 6, Tie 2.

Round 10: An illustration of DNA methylation.

This is fairly specific, so I don’t have high hopes for AI here. The main question in my mind is whether stock sites will have illustrations of methylation specifically. Let’s find out!

Round 10 AI Fails

I occasionally feel like I have to fight with Midjourney to not be so artistic all the time, but adding things like “realistic looking” or “scientific illustration of” didn’t exactly help.

Midjourney also really wanted DNA to be a triple helix. Or maybe a 2.5-helix?

I set the bar extremely low for Stable Diffusion and just tried to get it to draw me DNA. Doesn’t matter what style, doesn’t need anything fancy, just plain old DNA. It almost did! Once. (Top left below.) But in the process it also created a bunch of abstract mayhem (bottom row below).

With anything involving “methylation” in the query, DALL-E did that thing where it tries to replace accurate representation with what it thinks are words. I therefore tried to just give it visual instructions, but that proved far too complex.

Round 10 AI vs. Stock

I have to admit, I did not think it would be this hard to get reasonably accurate representations of regular DNA out of Midjourney. It did produce some, but not many, and the best looked like it was made by Jacob the Jeweler. If methyl groups look like rhinestones, 10/10. DALL-E did produce some things that look like DNA stock images circa 2010. All of these have the correct helix orientation as well: right-handed. That was a must.

iStock, Getty Images, and Science Photo Library all had multiple options for images representing methylation. Here is one from each, shown in the aforementioned order:

The point again goes to stock sites.

Final Score: AI 1, Stock 7, Tie 2.

Conclusion / Closing Thoughts

Much like generative text AI, generative image AI shows a lot of promise, but doesn’t yet have the specificity and accuracy needed to be broadly useful. It has a way to go before it can reliably replace stock photos and illustrations of laboratory and life science concepts for marketing purposes. However, for concepts which are fairly broad, or in cases where getting the idea across is sufficient, AI can sometimes act as a replacement for basic stock imagery. As for me, if I get a good feeling that AI could do the job and I’m not enthusiastic about the images I’m finding on lower-cost stock sites, I’ll most likely give Midjourney a go. Sixty dollars a month gets us functionally infinite attempts, so the value here is pretty good. If we get a handful of stock images out of it each month, that’s fine – and there are some from this experiment we’ll certainly be keeping on hand!

I would not be particularly comfortable about the future if I were a stock image site, but for higher-quality or more specialized images in particular, AI has a long way to go before it can replace them.

"Want your products or brand to shine even more than it does in the AI mind of Midjourney? Contact BioBM and let’s have a chat!"

Google Ads Auto-Applied Recommendations Are Terrible

Unfortunately, Google has attempted to make them ubiquitous.

Google Ads has been rapidly expanding its use of auto-applied recommendations recently, to the point where it briefly became my least favorite thing until I turned almost all auto-apply recommendations off for all the Google Ads accounts we manage.

Google Ads has a long history of thinking it’s smarter than you and failing. Left unchecked, its “optimization” strategies have the potential to drain your advertising budgets and destroy your advertising ROI. Many users of Google Ads’ product ads will be familiar with this. Product ads don’t allow you to set targeting; instead, Google chooses the targeting based on the content of the product page. That, by itself, is fine. The problem arises when Google tries to maximize its own ROI and expands the targeting contextually.

To give a practical example: we were managing an account advertising rotary evaporators. Rotary evaporators are very commonly used in the cannabis industry, so people would sometimes search for rotary evaporator terms along with cannabis terms. Google “learned” that cannabis-related terms were relevant to rotary evaporators: a downward spiral which eventually led to Google showing this account’s product ads for searches such as “expensive bongs.” Most people looking for expensive bongs probably saw a rotary evaporator, didn’t know what it was but did see it was expensive, and clicked on it out of curiosity. Because the ads were still getting clicks, Google took that as a cue that rotary evaporators were relevant to searches for “expensive bongs” and continued to expand outwards from there. The end result was us continuously playing negative keyword whack-a-mole, trying to exclude all the increasingly irrelevant terms Google thought were relevant to rotary evaporators. Over time, this devolved into Google expanding the rotary evaporator product ads to searches for – and this is not a joke – “crack pipes”.
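The whack-a-mole process boils down to a simple filter: flag any search query that shares no vocabulary with the product. A minimal sketch of that idea in Python (`PRODUCT_TERMS` and `propose_negatives` are illustrative names for this post, not part of any Google Ads API):

```python
# Hypothetical sketch of the negative-keyword "whack-a-mole" described above:
# scan a search-term report for queries that share no vocabulary with the
# product, and propose those queries as negative keyword candidates.
PRODUCT_TERMS = {"rotary", "evaporator", "evaporators", "rotovap", "distillation"}

def propose_negatives(search_terms):
    """Return queries containing none of the product's vocabulary."""
    negatives = []
    for query in search_terms:
        words = set(query.lower().split())
        if not words & PRODUCT_TERMS:  # no overlap with product terms
            negatives.append(query)
    return negatives

report = ["rotary evaporator price", "expensive bongs", "crack pipes", "rotovap chiller"]
print(propose_negatives(report))  # ['expensive bongs', 'crack pipes']
```

In practice you would review the candidates by hand before excluding them, since an exact-vocabulary filter will also flag legitimate long-tail queries.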

The moral of that story, which is not about auto-applied recommendations, is that Google does not understand complex products and services such as those in the life sciences. It likewise does not understand the complexities and nuances of individual life science businesses. It paints in broad strokes: broad strokes are easier to code, the managers don’t care because their changes make Google money, and as a near-monopoly Google has very little incentive to improve its services, because almost no one is going to pull their advertising dollars from the company with about 90% of search volume outside China. Having had some time to observe the changes Google’s auto-applied recommendations make, I can see the implicit assumptions built into them. Google either thinks you are selling something like pizza or legal services and largely have no clue what you’re doing, or that you have a highly developed marketing program with holistic, integrated analytics.

As an example of the damage Google’s auto-applied recommendations can do, consider a CRO we are working with. Like many CROs, they offer services across a number of different indications, with different ad groups for different indications. After Google had auto-applied some recommendations, some of which were bidding-related, we ended up with ad groups showing over a 100x difference in cost per click. In ad groups with highly specific, targeted keywords, there is no reasonable argument for how optimizing for conversions could lead Google to decide that one ad group should have a CPC more than 100x that of another. The optimizations did not lead to more conversions, either.

Google’s “AI” ad account optimizer further decided to optimize a display ad campaign for the same client by changing bidding from manual CPC to optimizing for conversions. The campaign went from getting about 1,800 clicks per week at a cost of about $30 to getting 96 clicks per week at a cost of $46. CPC went from $0.02 to $0.48! No wonder Google wanted to change the bidding: it showed the ads almost 19x less (CTR was not materially different before and after the auto-applied recommendations, so impressions fell roughly in line with clicks) and charged 24x more per click. Note that the targeting did not change. What Google was optimizing for was its own revenue per impression! It’s the same thing it’s doing when it decides to show rotary evaporator product ads on searches for crack pipes.
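The CPC figures follow directly from the weekly click and cost numbers above; a quick check:

```python
# Weekly figures for the display campaign before and after Google's
# auto-applied bidding change (from the account data described above).
clicks_before, cost_before = 1800, 30.00
clicks_after, cost_after = 96, 46.00

cpc_before = cost_before / clicks_before  # ~$0.017 per click
cpc_after = cost_after / clicks_after     # ~$0.479 per click

print(round(cpc_before, 2), round(cpc_after, 2))  # 0.02 0.48
```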

“Save time.” Is that what we’re doing?

Furthermore, Google’s optimizations to the ads themselves amount to horribly generic guesswork. A common optimization is to simply include the name of the ad group or terms from pieces of the destination URL in ad copy. GPT-3 would be horrified at the illiteracy of Google Ads’ optimization “AI”.

A Select Few Auto-Apply Recommendations Are Worth Leaving On

Google has a total of 23 recommendation types. Of those, I always leave on:

  • Use optimized ad rotation. There is very little opportunity for this to cause harm, and it addresses a point difficult to determine on your own: what ads will work best at what time. Just let Google figure this out. There isn’t any potential for misaligned incentives here.
  • Expand your reach with Google search partners. I always have this on anyway. It’s just more traffic. Unless you’re particularly concerned about the quality of traffic from sites which aren’t google.com, there’s no reason to turn this off.
  • Upgrade your conversion tracking. This allows for more nuanced conversion attribution, and is generally a good idea.

A whole 3/23. Some others are situationally useful, however:

  • Add responsive search ads can be useful if you’re having problems with Quality Score and your ad relevance is rated “below average”. This will generally allow Google to generate new ad copy that it thinks is relevant. Be warned: Google is very bad at generating ad copy. It will frequently keyword-spam without regard to context, but at least you’ll see what it wants you to do to generate more “relevant” ads. Note that I suggest this over “improve your responsive search ads” so that Google doesn’t destroy the existing ad copy which you may have spent time and effort creating.
  • Remove redundant keywords / remove non-serving keywords. Google says these options will make your account easier to manage, and that is generally true. I usually leave them off: a redundant keyword is usually redundant for a good reason, and non-serving keywords may occasionally start serving again if volume improves for a period of time. But if your goal is simplicity rather than deeper data and capturing every possible impression, leave these on.

That’s all. I would recommend leaving the other 18 off at all times. Unless you are truly desperate and at a complete loss for ways to grow your traffic, you should never allow Google to expand your targeting. That lesson has been repeatedly learned with Product Ads over the past decade plus. Furthermore, do not let Google change your bidding. Your bidding methodology is likely a very intentional decision based on the nature of your sales cycle and your marketing and analytics infrastructure. This is not a situation where best practices are broadly applicable, but best practices are exactly what Google will try to enforce.

If you really don’t want to be bothered at all, just turn them all off. You won’t be missing much, and you’ll probably save yourself some headaches down the line. From our experience thus far, the ability of Google Ads’ optimization AI to optimize campaigns for life science companies is far less than its ability to create mayhem.

"Even GPT-4 still gets the facts wrong a lot. Some things simply merit human expertise, and Google Ads is one of them. When advertising to scientists, you need someone who understands scientists and speaks their language. BioBM’s PhD-studded staff and deep experience in life science marketing mean we understand your customers better than any other agency – and understanding is the key to great marketing.

Why not leverage our understanding to your benefit? Contact Us."

How to Write a Life Science White Paper

From the perspective of the marketer, a critical early task in the life science buying journey is education. It may even come before your audience of scientists recognizes they have a problem which needs a product or service to solve it. Once you have piqued their interest and seeded an idea in their minds, you need a lot more to get them across the finish line. Sometimes, a longer-form method of communication is merited, and that’s where the white paper comes in.

The Life Science Buying Journey

For those who are relatively new to this website, I should note that I’m largely an adherent of Hamid Ghanadan’s view of the scientific buying journey, which sees scientists as inherently both curious and skeptical. It’s illustrated in detail in his excellent book Persuading Scientists, which is well-deserving of this long-overdue shout-out. I’ve captured some of the concepts in a previous post: “The Four Key Types of Content.” To give the oversimplified TL;DR version of both:

  • The default state of scientists is curious. They readily take in information.
  • As they take in new information, they form ideas about it and transition from being curious to being skeptical.
  • If they cannot validate the information, they generally reject it.

You can see how a buying journey fits into this mindset:

  • The scientist is presented with a new idea.
  • As they learn more about this idea, they realize that they may need a product or service.
  • They critically evaluate the product(s) / service(s) presented to them.
  • A decision is made.

The goal of the marketer is to seed the scientist’s curiosity, continuing to provide them with information which will shape their viewpoint in your favor without engaging skepticism too early. That is how you maximize your chances of a positive purchasing decision.

Understanding What a White Paper Is … and Isn’t

A white paper is intended to provide either educational content (helpful, customer-centric information) or validation content (information which verifies a belief the customers hold or a claim the brand is making, whether customer-centric or product-centric). In either case, the primary purpose is to inform your audience. Novice marketers may fixate on the format (usually PDF) and conflate a white paper with a brochure, but they are two very different things.

All marketing documents exist on a rhetorical sliding scale between being fully informational and fully promotional. A brochure would be far onto the promotional side of that scale; it is extremely product-centric and its purpose is largely to encourage a purchase. A white paper would be most of the way towards the informational side of that scale. Creating a white paper which is overly promotional risks engaging the scientists’ skepticism before they have adopted your viewpoint, creating a situation where their inclination is to disbelieve you. This situation generally results in them rejecting your offering.

Writing Copy for an Effective White Paper

Your white paper should be about:

  • a single topic
  • which is of interest to your audience
  • of which you know substantially more than your audience

This may seem simple, but framing it can be difficult.

Presumably, your company is in the business of solving some type of problems for life scientists. They might not know what their problem is, but you do. Why should they care? Why is what you are doing compelling? You almost certainly have answers to these questions, but you likely have them framed in the context of your product. How can you take those answers and communicate them in a manner which is customer-centric instead of product-centric? Start by talking about your scientist-customers’ problem rather than your solution and you’ll be headed in the right direction.

There are times when a more product-focused white paper can be appropriate, however. For instance, you may have a new technology which is unfamiliar to your audience and you need to educate them about it. In that case, you have to talk about your solution to some extent. When you do, be sure to focus on providing information about the technology, not promotion for the product. Take care to ensure the information is objective, communicated in an unbiased manner, and well-referenced with independent sources, and use independent voices (e.g. voice of the customer) wherever an opinion is necessary.

Formatting a White Paper Effectively

There is no particular length restriction on a life science white paper, but if you are calling it a white paper, your audience is likely expecting it to be somewhat in depth. A two-page minimum for a white paper is a good guideline to adhere to. For much longer white papers, you should consider yourselves constrained by your ability to maintain your audience’s attention. Demonstrating your expertise does not mean writing more than you need to. As is almost always the case, less is more. Be as concise as you can while fully communicating your point.

Avoid walls of text. Too many words and not enough visuals will make your audience less likely to get through your content. Use illustrations where possible, and don’t feel bad using relevant stock imagery to break things up. Ensure the document isn’t boring to the eyes by using brand-relevant colors, shapes, iconography, and other visuals. Ideally, you should have a generalized white paper format which you maintain throughout all of your documents to provide consistency. You want people who read your white paper to know it is your brand’s white paper, even if they didn’t see a logo.

Circling back to what a white paper is and isn’t, you’ll recall that we need a primarily informational document. However, you might not want an entirely informational document. Your job is to sell things, and purely informational documents are generally not great at selling. You want to sprinkle some promotion in there. But how? Through creative use of formatting! You don’t want people to become skeptical of the information in the body of the white paper, so don’t put promotional content in the body of the white paper! Use clearly delineated sections to cordon off your promotional content. Help prevent skepticism of your promotional messages by using voice-of-customer content (testimonials, etc.) whenever possible. You can also save your promotional messages for where customers most expect them: the end of the document. And as with almost any effective marketing document, don’t leave out the call-to-action!

This is a stock image of life science brochure templates and doesn’t say anything meaningful at all, but you probably stopped to look at them because they’re visually appealing.

Deploy Your White Paper Effectively

Far too often, life science companies will write a really good white paper and then tuck it away in some remote corner of their website. You have it; use it! Post about it on social media (more than once!), put it somewhere on your website which is relevant but readily findable by anyone looking for that kind of information, and blast it out in an email to a well-segmented section of your audience. If appropriate, use it as the hook for a well-targeted paid advertising campaign. The worst thing you can do after spending the time and resources to create a white paper is to let only a few dozen people ever read it.

Presumably you’ll be using your white paper to generate leads and will therefore have it gated with a download form (although you certainly don’t have to). If it is gated, create a compelling download page for your white paper which previews just enough of the content to make the audience want more but without giving up its most important lessons.

Recap on Effective Life Science White Papers

To write an effective white paper:

  • Understand where your white paper fits within the customer journey.
  • Maintain its primarily informational purpose.
  • Keep to one topic which will be of interest to your audience.
  • Focus on information which most of your audience likely will not know.
  • Allow what you have to communicate to dictate the length.
  • Don’t skimp on the visuals.
  • Clearly separate any promotional messages to avoid creating skepticism about the core topic.
  • Shout it from the rooftops to get attention to it!

White papers are centerpieces of many life science demand generation campaigns. By understanding and implementing these guidelines, you can craft white papers that help drive successful lead generation for your life science company as well.

"Not sure how to best deploy content to help fuel your marketing efforts? Experiencing writer’s block? Don’t spend time fretting, just contact BioBM. Our life science marketing experts are here to help innovative companies like yours craft purposeful, effective content to influence your scientist-customers and encourage them into action."

Are You Providing Self-Service Journeys?

Customers are owning more of their own decisions.

We’ve all heard the data on how customers are delaying contact with salespeople and owning more of their own decision journeys. Recent research from Forrester predicts that the share of B2B sales, by dollar value, conducted via e-commerce will increase by about a third from 2015 to 2020: from 9.3% to 12.1%. Why does Forrester see this number growing at such a rate? Primarily due to “channel-shifting B2B buyers” – people who are willingly conducting purchases entirely online rather than going through a manned sales channel.
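The “about a third” framing checks out against Forrester’s percentages:

```python
# Relative growth in the e-commerce share of B2B sales,
# per the Forrester figures cited above (9.3% in 2015 -> 12.1% in 2020).
share_2015, share_2020 = 9.3, 12.1
relative_growth = (share_2020 - share_2015) / share_2015
print(f"{relative_growth:.0%}")  # 30%
```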

All this adds up to more control of the journey residing with the customers themselves and fewer opportunities for salespeople to influence them. Your marketing needs to accommodate these control-seeking customers. It needs to accommodate as much of the buying journey as it can, and in many instances it can and should accommodate the entire buying journey – digitally.

Scientist considering an online purchase

Accommodating Digital Buying Journeys

Planning for the enablement of self-service journeys is a complex, multi-step process. In brief, it consists of:

  1. Understanding the relevant customer personas. Defining customer personas is always a somewhat ambiguous task, but my advice is always not to over-define them. It’s easy to achieve so much granularity that the exercise becomes meaningless: far too many personas with far too little to distinguish their journeys in any practical sense. It’s okay to paint with a broad brush. For a relatively small industry such as ours, factors such as “level of influence on the purchasing decision” and “familiarity with the technology” are far more useful than the B2C demographics you’ll likely see in examples of customer persona creation. It probably doesn’t much matter whether the scientist you’re defining is a millennial or a Gen X-er, nor do you likely need to account for the difference between scientists and senior scientists. That’s not what’s important. Focus on the critical factors, and clear your mind of everything else.
  2. Mapping the journey for each persona. This can be done with data analytics, market research, and / or simply a good old-fashioned thought experiment, depending on your resources and capabilities as well as how accurate you need to be. If you’re using data, use the customers who converted as examples and trace their buying journeys from the beginning (which will probably have online and offline components). Bin each into the appropriate persona, then use them to inform what the journey requires for that persona. The market research approach is fairly straightforward and can be done with any combination of interviews, focus groups, and user testing. If you’re on a budget and just want to sit down and brainstorm the decision journey, start with each “raw” customer persona, then ask: “Where does this person want to go next in their decision journey?” A scientist may want more information, they may desire a certain experience, etc. Continue asking that question until you get to the point of purchase.
  3. Mapping information or experiences to each step of the journey. Once you know the layout of the journeys and the goals at each step, it should be relatively clear what you need to provide the customer at each step to get them to move forward in their journey. This step is really just asking: “How will we address their needs at each discrete step of their journey?”
  4. Determining the most appropriate channel for the delivery of each experience. You now know what you’re going to deliver to each customer at each point in the decision journey to keep them moving forward, but how you deliver it matters as well. On paper, it might seem as though you can simply provide all the information and experiences the customer needs in one sitting, and that’s all they will need to complete their decision journey. In practice, it rarely works that way. Decisions often involve multiple stakeholders and take place over days, weeks, or months. Few B2B life science purchasing decisions are made on impulse. For young or less familiar brands, you may also need time for the scientist to develop sufficient familiarity with your brand to be comfortable purchasing from you. This is where you must consider not only the structure of the buying journey, but the somewhat less tangible elements of its progression. Structured correctly, your roadmap should essentially remove steps from the buying journey for the customer.
  5. Implement it! You now know what the scientists’ decision journeys look like and exactly how you’ll address them. Bring that knowledge into the real world and create a holistic digital experience that enables completion of the self-serve buying journey!
  6. That’s it! Your marketing is now ready for today’s (and tomorrow’s) digitally-inclined buyers.
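The mapping in steps 1–4 lends itself to a simple data structure. A minimal sketch in Python (all personas, steps, content, and channels here are hypothetical examples, not prescriptions):

```python
# Hypothetical persona -> journey map: each persona gets an ordered list of
# journey steps, and each step names the content that addresses the customer's
# need at that point plus the channel used to deliver it (steps 1-4 above).
journey_map = {
    "bench scientist, new to the technology": [
        {"step": "problem awareness", "content": "educational blog post", "channel": "organic search"},
        {"step": "solution research", "content": "white paper", "channel": "gated download"},
        {"step": "evaluation", "content": "application data", "channel": "email nurture"},
        {"step": "purchase", "content": "quote / e-commerce checkout", "channel": "website"},
    ],
    "lab manager, controls the budget": [
        {"step": "evaluation", "content": "ROI comparison", "channel": "email"},
        {"step": "purchase", "content": "quote request", "channel": "website"},
    ],
}

def next_step(persona, completed):
    """Return the first journey step the persona hasn't completed yet."""
    for stage in journey_map[persona]:
        if stage["step"] not in completed:
            return stage
    return None  # journey complete

print(next_step("lab manager, controls the budget", {"evaluation"}))
```

Even as a thought-experiment artifact rather than running code, writing the map down this explicitly forces you to answer, for every persona and step, what you will deliver and where.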

Owning the Journey

What we’ve outlined above will create a digital experience that allows customers to complete a purchasing decision on their own terms, which is something they increasingly want to do. Building such an experience will give you a definite advantage, but your customers will still shop around; it alone is not enough to get them to home in solely on your brand (which, if we’re being honest, is an incredibly difficult task anyway).

Digital marketing is not only capable of enabling your scientist-customers to complete their decision journeys on their own, however. It is possible to create a digital experience that owns a hugely disproportionate share of the decision journey and thereby exerts outsized influence upon it. Such mechanisms are called decision engines, and when properly implemented they give their creators massive influence over their markets. If you would like to learn more about decision engines, check out this recent podcast we did on the topic with Life Science Marketing Radio or download our report on the topic.

"Is your life science brand adapting to the changing nature of scientists’ buying journeys? If you’re not well on your way to completing your marketing’s digital transformation, then it’s probably time to call BioBM. Not only do we have the digital skill set to develop transformational capabilities for our life science clients, but we stay one step ahead with our strategies. We live in an age of constant change, and we work to ensure that our clients aren’t simply following today’s best practices, but are positioned to be the leaders of tomorrow. We’ll provide you with the next generation of marketing strategies, which will not only elevate your products and services, but turn your marketing program into a strategic advantage. So what are you waiting for?"

Carlton Hoyt Discusses Decision Engines on Life Science Marketing Radio

Principal Consultant Carlton Hoyt recently sat down with Chris Conner for the Life Science Marketing Radio podcast to talk about decision engines, how they are transforming purchasing decisions, and what the implications are for life science marketers. The recording and transcript are below.

Transcript

CHRIS: Hello and welcome back. Thank you so much for joining us again today. Today we’re going to talk about decision engines. These are a way to help ease your customer’s buying process when there are multiple options to consider. So we’re going to talk about why that’s important and the considerations around deploying them. So if you offer lots and lots of products and customers have choices to make about the right ones, you don’t want to miss this episode.