
Avoid CPM Run of Site Ads

Not all impressions are created equal.

We don’t think about run of site (ROS) ads much because we rarely use them; we try to be very intentional with our targeting. However, we recently had an engagement where we were asked to design ads for a display campaign on a popular industry website. The goal of the campaign was brand awareness (also something to avoid, but that’s for another post). The client was engaging with the publisher directly. We recommended the placement, designed the ads, and provided them to the client, figuring the job was done. The client later returned to us asking for more ad sizes: the desired placement was not available, so the publisher had suggested run of site ads instead.

Some background for those less familiar with display advertising

If you are familiar with placement-based display advertising, you can skip this whole section. For the relative advertising novices, I’ll explain a little about various ad placements, their nomenclature, and how ads are priced.

An ad which is much wider than it is tall is generally referred to as a billboard, leaderboard, or banner ad. These are referred to as such because their placement on webpages is often near the top, although that is far from universally true, and even where it is true they often appear lower on the page as well. In our example on the right, which is a zoomed-out screenshot of the Lab Manager website, we see a large billboard banner at the top of the website (outlined in yellow), multiple interstitial banners of various sizes (in orange) and a small footer banner (green) which was snapped to the bottom of the page while I viewed it.

An ad which is much taller than it is wide is known as a skyscraper, although ones which are particularly large and a bit thicker may be called portraits, and large ads with 1:2 aspect ratios (most commonly 300 x 600 pixels) are referred to as half page ads. Lab Manager didn’t have those when I looked.

The last category of ad sizes is the square or rectangle ads. These are ads which do not have a high aspect ratio; generally less than 2:1. We can see one of those highlighted in purple. There is also some confusing nomenclature here: a very common ad of size 300 x 250 pixels is called a medium rectangle but you’ll also sometimes see it referred to as an MPU, and no one actually knows the original meaning of that acronym. You can think of it as mid-page unit or multi-purpose unit.

As you can see, there are many different placements and ad sizes, and it stands to reason that all of these will perform differently! If we were paying for these on a performance basis, say with cost-per-click, the variability in performance between the different placements would be self-correcting. If I am interested in a website’s audience and I’m paying per click, then I [generally] don’t care where on the page the click is coming from. However, publishers don’t like to charge on a per-click basis! If you are a publisher, this makes a lot of sense. Publishers think of themselves as being in the business of attracting eyeballs, not in the business of getting people to click on ads (even though, to some extent, they are). They simply want to publish content which attracts their target market. Furthermore, they definitely don’t want their revenues to be at the whims of the quality of the ads which their advertisers run, nor do they want to have to obtain and operate complex advertising technology to optimize their revenue per impression (views are generally sold at a cost per 1000 views, or CPM) when their advertisers are bidding based on cost per click (CPC).

What are Run Of Site Ads and why should you be cautious of them?

You may have noticed that the above discussion of ad sizes didn’t mention run of site ads. That is because run of site ads are not a particular placement nor a particular size. What “run of site” means is essentially that your ad can appear anywhere on the publisher’s website. You don’t get to pick.

Think about that. If your ads can appear anywhere, then where are they appearing in reality? They are appearing in the ad inventory which no one else wanted to buy. Your ads can’t appear in the placements which were sold. They can only appear in the placements which were not sold. If your insertion order specifies run of site ads, you are getting the other advertisers’ leftovers.

That’s not to say that ROS ads are bad in all circumstances, nor that publisher-side ad salespeople who try to sell them are trying to trick you in any way. There is nothing malicious going on. In order to get value from ROS ads, you need to do your homework and negotiate accordingly.

How to get good value from ROS ads

Any worthwhile publisher will be able to provide averaged metrics for their various ad placements. If you look at their pricing and stats you may find something like this:

Ad Format          CTR      CPM
Multi-unit ROS     0.05%    $40
Billboard Banner   0.35%    $95
Medium Rectangle   0.15%    $50
Half Page          0.10%    $50
Leaderboard        0.10%    $45
These are made-up numbers from nowhere in particular, but they are fairly close to numbers you might find in the real world at popular industry websites. Your mileage may vary.

A reasonable working assumption is that if people aren’t clicking an ad, they’re not paying attention to it. Averaged out over time and across many advertisers, we cannot attribute the difference to the ads in some positions simply being better; ad quality washes out. Likewise, there is no logical reason why an ad’s position alone would make a person less likely to click on it, aside from the position failing to get the person’s attention in the first place. This is why billboard banners have very high clickthrough rates (CTR): they’re the first thing you see at the top of the page. Publishers like to price large ads higher than smaller ads, but it’s not always the case that the larger ads have a higher CTR.

With that assumption, take the inventory offered and convert the CPM to CPC using the CTR. The math is simple: CPC = CPM / (1000 * CTR).

Ad Format          CTR      CPM     Effective CPC
Multi-unit ROS     0.05%    $40     $80
Billboard Banner   0.35%    $95     $27
Medium Rectangle   0.15%    $50     $33
Half Page          0.10%    $50     $50
Leaderboard        0.10%    $45     $45
By converting to CPC, you have a much more realistic and practical perspective on the value of an ad position.
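The same conversion takes only a few lines of code. Here is a minimal sketch in Python, using the illustrative numbers from the table above (for a real negotiation you would substitute the publisher's own rate card):

```python
# Convert a publisher's CPM rate card into effective cost per click:
# CPC = CPM / (1000 * CTR). Figures are the illustrative ones from above.
rate_card = {
    "Multi-unit ROS":   {"ctr": 0.0005, "cpm": 40.0},
    "Billboard Banner": {"ctr": 0.0035, "cpm": 95.0},
    "Medium Rectangle": {"ctr": 0.0015, "cpm": 50.0},
    "Half Page":        {"ctr": 0.0010, "cpm": 50.0},
    "Leaderboard":      {"ctr": 0.0010, "cpm": 45.0},
}

for placement, stats in rate_card.items():
    effective_cpc = stats["cpm"] / (1000 * stats["ctr"])
    print(f"{placement:<18} effective CPC: ${effective_cpc:,.2f}")
```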

Here, we see those really “cheap” run of site ads are actually the most expensive on a per click basis, and the billboard banner is the cheapest! Again, even for more nebulous goals like brand awareness, we can only assume that CTR is a proxy for audience attentiveness. Without eye tracking or mouse pointer tracking data, which publishers are highly unlikely to provide, CTR is the best attentiveness proxy we have.

With this information, you can make the case to the publisher to drop the price of their ROS ads. They might do it. They might not. Most likely, they’ll meet you somewhere in the middle. By making a metrics-driven case to them, however, you’ll be more likely to get the best deal they are willing to offer. (ProTip: If you’re not picky about when your ads run, go to a few publishers with a low-ball offer a week or so before the end of the month. Most publishers sell ads on a monthly basis, and if they haven’t sold all their inventory, you’ll likely be able to pick it up at a cut rate. They get $0 for any inventory they don’t sell. Just be ready to move quickly.)

The other situation in which ROS ads are useful, and can be a good value, is when you want to buy up all the ad inventory. Perhaps a highly relevant publisher is running a highly relevant feature, and it all adds up to an audience you want to saturate. You can pitch a huge buy of ROS ads which will soak up the remaining inventory for the period of time when that feature is running, and potentially get good placements at the ROS price. Just make sure you know what you’re buying and that the publisher isn’t trying to sell their best placements on the side.

Lessons

  • Run of site ads aren’t all bad, but novice advertisers can end up blowing a bunch of money if they’re not careful.
  • Regardless of placement, always be mindful of the metrics of the ads you’re buying.
  • Even if your campaign goals are more attention-oriented than action-oriented, CTR is a good proxy for attentiveness and effective CPC is a good measure of value.
"Want better ROI from your advertising campaigns? Contact BioBM. We’ll ensure your life science company is using the right strategies to get the most from your advertising dollars."

Can DALL-E 3 Generate Passable Life Science Images?

For those uninitiated to our blog, a few months ago I ran a fairly extensive, structured experiment to compare DALL-E 2, Midjourney 5, and Stable Diffusion 2 to see if any of them could potentially replace generic life science stock imagery. It ended up being both informative and accidentally hilarious, and you can see the whole thing here. But that was back in the far-gone yesteryear of July, it is currently December, and we live in the early era of AI, which means that months are now years and whatever happened 5 months ago is surely obsolete. Since DALL-E 3 came out in October, it’s worth finding out if it can do better than DALL-E 2 did in the previous round, where DALL-E 2 was notably inferior to Midjourney for 9 of the 10 queries.

Perhaps I’ll do a more comprehensive comparison again later, but for now I’m just going to run some similar queries to the ones used last time to get a reasonable side-by-side comparison. Bing Image Creator was used to generate the images since labs.openai.com, which was used last time, is still plugged in to DALL-E 2.

Test 1: A female scientist performing cell culture at a biosafety cabinet.

The last time we tried this, DALL-E 2 gave us images that looked 75% like a picture and 25% like claymation, but even if that problem wasn’t there it was still somewhat far off. Let’s see if DALL-E 3 can do better.

I tried to be a little bit descriptive with these prompts, as supposedly DALL-E 3 uses GPT4 and better understands written requests. Supposedly. Here’s what it gave me for “A photograph of a female scientist in a laboratory sitting at a biosafety cabinet holding a serological pipette performing cell culture. Her cell culture flasks have yellow caps and her cell culture media is red.” It definitely got the yellow caps and red media. As for the rest…

It’s immediately clear that DALL-E 3, just like all its ilk, was primarily trained from large repositories of generic stock images, because all these labs look like what you would imagine a lab would look like if you didn’t know what a lab actually looked like. There are plenty of generic microscopes close at hand, although it didn’t even get those right. There are no biosafety cabinets to be found. Those vessels are essentially test tubes, not cell culture flasks. To top it off, all the female scientists look like porcelain dolls modeling for the camera. I tried to fix at least one of those things and appended “She is attentive to her work.” to the subsequent query. Surprisingly, this time it seemed to make some subtle attempts at things which might be construed as biosafety cabinets, but only to a completely naive audience (and, of course, it put a microscope in one of them).

Since DALL-E 2 arguably provided more realistic looking people in our previous test, I reverted to the simplicity of the previously used query: “A photograph of a female scientist performing cell culture at a biosafety cabinet.”

We’re not getting any closer. I have to call this an improvement because it doesn’t look like the image is melting, but it’s still very far from usable for a multitude of reasons: the plasticware is wrong, the pipettes are wrong, the people still look like dolls, the biosafety cabinets aren’t right, some of the media seems to be growing alien contamination, the background environment isn’t realistic, etc.

Today’s comic relief is brought to you by my attempt to get it to stop drawing people as porcelain dolls. I Googled around a bit and found that queries structured differently sometimes are better at generating realistic looking people so I gave this prompt a go: “2023, professional photograph. a female scientist performing cell culture at a biosafety cabinet.” What a gift it gave me.

Test 2: Liquid dripping from pipette tips on a high-throughput automated liquid handling system.

I’m choosing this one because it was the only query that DALL-E 2 was almost good at in our previous comparison. Out of 10 tests in that experiment, Midjourney produced the best output 9 times and DALL-E once. This was that one. However, stock imagery was still better. DALL-E 2’s image didn’t capture any of the liquid handler and the look of the image was still a bit melty. Let’s see if it’s improved!

Prompt: “A close up photograph of liquid dripping from pipette tips on a high-throughput automated liquid handling system.”

DALL-E 3 seems to have eschewed realism entirely and instead picked up Midjourney’s propensity for movie stills and sci-fi. Perhaps more specificity will solve this.

Prompt 2: “A close up photograph of liquid being dispensed from pipette tips into a 96-well microplate in a high-throughput automated liquid handling system.”

DALL-E clearly only has a vague idea of what a 96-well plate looks like and also cannot count; none of these “plates” actually have 96 wells. Regardless, these are no more realistic, clearly unusable, and DALL-E 2’s output would likely have a far greater probability of passing as real.

So nope, we’re still not there yet, and Midjourney is probably still the best option for realistic looking life science images based on what I’ve seen so far.

… but what about micrographs and illustrations?

All the previous prompts dealt with recreations of real-world images. What about images which a microscope would take, or scientific illustrations? To find out, I quickly ran four prompts I had used last time:

  • A high-magnification fluorescent micrograph of neural tissues
  • A colored scanning electron micrograph of carcinoma cells
  • A ribbon diagram of a large protein showing quaternary structure
  • A 3D illustration of plasmacytes releasing antibodies

Here is the best it provided for each, in clockwise order from top left:

DALL-E 3’s neurons were actually worse than DALL-E 2’s, with nothing even being remotely close. Its carcinomas were more in line with what Midjourney provided last time, but look slightly more cartoonish. The ribbon diagram is better than any from the last test, although the structure is blatantly unrealistic. Its plasmacytes could make for a passable graphic illustration, if only they contained anything that looks like antibodies.

Conclusion

DALL-E 3 is a clear improvement from DALL-E 2. While it may be two steps forward and one step back, overall it did provide outputs which were closer to being usable than in our last test. It still has a way to go, and I don’t think it will peel us away from defaulting to Midjourney, but if it continues to improve at this rate, DALL-E 4 could provide a breakthrough for the generation of life science stock images.

"Want brand to shine brighter than even DALL-E could imagine? Contact BioBM. We’ll win you the admiration and attention of your scientist customers."

Can AI Replace Life Science / Laboratory Stock Images?

We’re over half a year into the age of AI, and its abilities and limitations for both text and image generation are fairly well-known. However, the available AI platforms have had a number of improvements over the past months, and have become markedly better. We are slowly but surely getting to the point where generative image AIs know what hands should look like.

But do they know what science looks like? Are they a reasonable replacement for stock images? Those are the meaningful questions if they are going to be useful for the purposes of life science marketing. We set to answer them.

A Few Notes Before I Start Comparing Things

Being able to create images which are reasonably accurate representations is the bare minimum for the utility of AI in replacing stock imagery. Once we move past that, the main questions are those of price, time, and uniqueness.

AI tools are inexpensive compared with stock imagery. A mid-tier stock imagery site such as iStock or ShutterStock will charge roughly $10 per image if paid with credits, or anywhere from $7 down to roughly a quarter per image if you purchase a monthly subscription. Of course, if you want something extremely high-quality, images from Getty Images or a specialized science stock photo provider like Science Photo Library or ScienceSource can easily cost many hundreds of dollars per image. In comparison, Midjourney’s pro plan, which is $60 / month, gives you 30 hours of compute time. Each prompt will provide you with 4 images and generally takes around 30 seconds. You could, in theory, acquire 8 images per minute, meaning each costs 0.4 cents. (In practice, with the current generation of AI image generation tools, you are unlikely to get images which match your vision on the first try.) DALL-E’s pricing is even simpler: each prompt is one credit, also provides 4 images, and credits cost $0.13 each. Stable Diffusion is still free.
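For anyone who wants to check that arithmetic, here is a rough back-of-the-envelope sketch in Python. The plan terms and prices are the approximate figures cited above and will certainly drift over time:

```python
# Rough per-image cost comparison, using the approximate figures cited above.
midjourney_monthly_usd = 60.0   # pro plan price per month
fast_hours = 30                 # compute time included in the plan
seconds_per_prompt = 30         # typical generation time per prompt
images_per_prompt = 4

images_per_month = fast_hours * 3600 / seconds_per_prompt * images_per_prompt
print(f"Midjourney: ~${midjourney_monthly_usd / images_per_month:.4f}/image")  # ~$0.0042 (0.4 cents)

dalle_credit_usd = 0.13         # one credit = one prompt = 4 images
print(f"DALL-E:     ~${dalle_credit_usd / images_per_prompt:.4f}/image")       # ~$0.0325

stock_credit_usd = 10.0         # mid-tier stock site, credit pricing
print(f"Stock:      ~${stock_credit_usd:.2f}/image")
```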

Having used stock image sites extensively, and having spent some time playing around with the current AI offerings for purposes other than business, it’s not clear to me which is more convenient and takes less time. Sometimes you’ll get lucky and get a good AI image on the first try, but you could say the same about stock image sites. Where AI eliminates the need to go through pages and pages of stock images to find the right one, it replaces that with tweaking prompts and waiting for the images to generate. It should be noted that there is some learning curve to using AI as well. For instance, you learn to ask for a “film still” or “photograph” if you want a representation of real life which isn’t meant to look illustrated and cartoonish. There are a million of these tricks, and each system has its own small library of commands which it helps to be familiar with so you can get an optimal output. Ultimately, AI probably does take a little bit more time, and it also requires more skill. Mindlessly browsing for stock images is still much easier than trying to get a good output from a generative AI (although playing with AI is usually more fun).

Where stock images simply can’t compete at all is uniqueness. When you generate an image with an AI, it is a unique image. Every image generated is one of one. You don’t get the “oh, I’ve seen this before” feeling that you get with stock images, which is especially prevalent for life science / laboratory topics given the relatively limited supply of scientific stock images. We will probably, at some point in the not too distant future, get past the point of being able to identify an AI image meant to look real by the naked eye. Stock images have been around for over a century and the uniqueness problem has only become worse. It is inherent to the medium. The ability to solve that problem is what excites me most about using generative AI imagery for life science marketing.

The Experiment! Ground Rules

If this is going to be an experiment, it needs structure. Here is how it is going to work.

The image generators & stock photo sites used will be:

I was going to include ShutterStock but there’s a huge amount of overlap with iStock, I often find iStock to have slightly higher-quality images, and I don’t want to make more of a project out of this than it is already going to be.

I will be performing 10 searches / generations. To allow for a mix of ideas and concepts, some will be of people, some will be of things, I’ll toss in some microscopy-like images, and some will be of concepts which would normally be presented in an illustrated rather than photographed format. With the disclaimer that these concepts are taken solely from my own thoughts in the hope of achieving a good diversity of concepts, I will be looking for the following items:

  1. A female scientist performing cell culture at a biosafety cabinet.
  2. An Indian male scientist working with an LC-MS instrument.
  3. An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.
  4. A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.
  5. An NGS instrument on a bench in a genomics lab.
  6. A high-magnification fluorescent micrograph of neural tissues.
  7. A colored scanning electron micrograph of carcinoma cells.
  8. A ribbon diagram of a large protein showing quaternary structure.
  9. A 3D illustration of plasmacytes releasing antibodies.
  10. An illustration of DNA methylation.

So that nothing has an edge, none of these are things which I have recently searched for on stock image sites or previously attempted to generate using AI tools. Note that these are solely the ideas which I am looking for; they are not necessarily the exact queries used when generating AI images or searching the stock photo sites.

Looking for stock images and generating AI graphics are very different processes but they both share one critical dimension: time. I will therefore be limiting myself to 5 minutes on each platform for each image. That’s a reasonable amount of time to try to either find a stock image or get a decent output from an AI. It will also ensure this experiment doesn’t take me two days. Here we go…

Round 1: A female scientist performing cell culture at a biosafety cabinet.

One thing that AI image generators are really bad at in the context of the life sciences is being able to identify and reproduce specific things. I thought that this one wouldn’t be too hard because these models are in large part trained on stock images and there’s a ton of stock images of cell culture, many of which look fairly similar. I quickly realized that this was going to be an exercise in absurdity and hilarity when DALL-E gave me a rack of 50 ml Corning tubes made of Play-Doh. I would be doing you a grave disservice if I did not share this hilarity with you, so I’ll present not only the best images which I get from each round, but also the worst. And oh, there are so many.

I can’t withhold the claymation 50 ml Corning tubes from you. It would just be wrong of me.

I also realized that the only real way to compensate for this within the constraints of a 5-minute time limit is to mash the generate button as fast as I can. When your AI only has a vague idea of what a biosafety cabinet might look like and it’s trying to faithfully reproduce them graphically, you want it to be able to grasp at as many straws as possible. Midjourney gets an edge here because I can run a bunch of generations in parallel.

Now, without further ado, the ridiculous ones…

Round 1 AI Fails

Dall-E produced a large string of images which looked less like cell culture than women baking lemon bars.

Midjourney had some very interesting takes on what cell culture should look like. My favorite is the one that looks like something in a spaceship and involves only machines. The woman staring at her “pipette” in the exact same manner I am staring at this half-pipette half-lightsaber over her neatly arranged, unracked tubes is pretty good as well. Side note: in that one I specifically asked for her to be pipetting a red liquid in a biosafety cabinet. It made the gloves and tube caps red. There is no liquid. There is no biosafety cabinet.

For those who have never used it, Stable Diffusion is hilariously awful at anything meant to look realistic. If you’ve ever seen AI images of melted-looking people with 3 arms and 14 fingers, it was probably Stable Diffusion. The “best” it gave me were things that could potentially be biosafety cabinets, but when it was off, boy was it off…

Rule number one of laboratories: hold things with your mouth. (Yes we are obviously kidding, do not do that.)

That was fun! Onto the “successes.”

Round 1 AI vs. Stock

Midjourney did a wonderful job of creating realistic-looking scientists in labs that you would only see in a movie. Also keeping with the movie theme, Midjourney thinks that everyone looks like a model; no body positivity required. It really doesn’t want people to turn the lights on, either. Still, the best AI results, by a country mile, were from Midjourney.

The best Dall-E could do is give me something that you might confuse as cell culture at a biosafety cabinet if you didn’t look at it and were just looking past it as you turned your head.

Stable Diffusion’s best attempts are two things which could absolutely be biosafety cabinets in Salvador Dali world. Also, that scientist on the right may require medical attention.

Stock image sites, on the other hand, produce some images of cell culture in reasonably realistic looking settings, and it took me way less than 5 minutes to find each. Here are images from iStock, Getty Images, and Science Photo Library, in that order:

First round goes to the stock image sites, all of which produced a better result than anything I could coax from AI. AI 0 – 1 Stock.

Round 2: An Indian male scientist working with an LC-MS instrument.

I am not confident that AI is going to know what an LC-MS looks like. But let’s find out!

One notable thing that I found is that the less specific you become, the easier it gets for the AI. The below image was a response to me prompting Dall-E for a scientist working with an LC-MS, but it did manage to output a realistic looking person in an environment that could be a laboratory. It’s not perfect and you could pick it apart if you look closely, but it’s pretty close.

A generic prompt like “photograph of a scientist in a laboratory” might work great in Midjourney, or even Dall-E, but the point of this experiment would be tossed out the window if I set that low of a bar.

Round 2 AI Fails

Midjourney:

Dall-E:

Stable Diffusion is terrible. It’s difficult to tell the worst ones from the best ones. I was going to call one of these the “best” but I’m just going to put them all here because they’re all ridiculous.

Round 2 AI vs. Stock

Midjourney once again output the best results by far, and had some valiant efforts…

… but couldn’t match the real thing. Images below are from iStock, Getty Images, and Science Photo Library, respectively.

One thing you’ve likely noticed is that none of these are Indian men! While we found good images of scientists performing LC-MS, we couldn’t narrow it down to both race and gender. Sometimes you have to take what you can get! We were generally able to find images which show more diversity, however, and it’s worth noting that Science Photo Library had the most diverse selection (although many of the images which I found there are editorial use only, which is very limiting from a marketing perspective).

Round 2 goes to the stock sites. AI 0 – 2 Stock.

Round 3: An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.

This should be easier all around.

Side note: I should’ve predicted this, but as the original query merely asked for science, my initial Midjourney query made it look like the lab was presenting something out of a sci-fi game. Looked cool, but not what we’re aiming for.

Round 3 AI Fails

Dall-E presented some interesting science on the genetic structure of dog kibble.

Dall-E seemed to regress with these queries, as if drawing more than one person correctly was just way too much to ask. It produced a huge stream of almost Picasso-esque people presenting something that vaguely resembled things which could, if sufficiently de-abstracted, be scientific figures. It’s as if it knows what it wants to show you but is drawing it with the hands of a 2 year old.

Stable Diffusion is just bad at this. This was the best it could do.

Round 3 AI vs. Stock

Take the gloves off, this is going to be a battle! While Midjourney continued its penchant for lighting which is more dramatic than realistic, it produced a number of beautiful images with “data” that, while they are extravagant for a lab meeting, could possibly be illustrations of some kind of life science. A few had some noticeable flaws – even Midjourney does some weird stuff with hands sometimes – but they largely seem usable. After all, the intent here is as a replacement for stock images. Such images generally wouldn’t be used in a way which would draw an inordinate amount of attention to them. And if someone does notice a small flaw that gives it away as an AI image, is that somehow worse than it clearly being stock? I’m not certain.

Stock images really fell short here. The problem is that people taking stock photos don’t have data to show, so they either don’t show anyone presenting anything, or they show them presenting something which betrays the image as generic stock. Therefore, to make them look like scientists, they put them in lab coats. Scientists, however, generally don’t wear lab coats outside the lab. It’s poor lab hygiene. Put a group of scientists in a conference room and it’s unusual that they’ll all be wearing lab coats.

That’s exactly what iStock had. Getty Images had an image of a single scientist presenting, but you didn’t see the people he was presenting to. Science Photo Library, which has far less to choose from, also didn’t have people presenting visible data. The three comps are below:

Side Note / ProTip: You can find that image from Getty Images, as well as many other images that Getty Images labels as “royalty free”, on iStock (or other stock image sites) for way less money. Getty will absolutely fleece you if you let them. Do a reverse image search to find the cheapest option.

Considering the initial idea we wanted to convey, I have to give this round to the AI. The images are unique, and while they lack some realism, so do the stock images.

Round 3 goes to AI. AI 1 – 2 Stock.

Let’s see if Dall-E or Stable Diffusion can do better in the other categories.

Round 4: A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.

I’ve seen nice stock imagery of this before. Let’s see if AI can match it, and if I can readily find it again on the stock sites.

Round 4 AI Fails

Dall-E had a long string of images which looked like everything shown was made entirely of polystyrene and put in the autoclave at too high a temperature. You might have to click to expand to see the detail. It looks like everything partially melted, but then resolidified.

Stable Diffusion is more diffuse than stable. Three of these are the best that it did while the fourth is when it gave up and just started barfing visual static.

This is the first round where Midjourney, in my opinion, didn’t do the best job. Liquid handling systems have a fair amount of variability in how they can be presented, but pipette tips do not, and it didn’t seem to know what pipette tips should look like, nor how they would be arranged in a liquid handling system. These are the closest it got:

Very pretty! Not very accurate.

Round 4 AI vs. Stock

We have a new contestant for the AI team! Dall-E produced the most realistic looking image. Here you have it:

Not bad! Could it be an automated pipetting system? We can’t see it, but it’s possible. The spacing between the tips isn’t quite even and it looks like PCR strips rather than a plate, but hey, a microplate wasn’t part of the requirements here.

Let’s see what I can dig up for stock… Here’s iStock, Getty, and SPL, respectively:

I didn’t get the drips I was looking for – probably needed to dig more for that – but we did get some images which are obviously liquid handling systems in the process of dispensing liquids.

As valiant of an effort as Dall-E had, the images just aren’t clean enough to have the photorealism of real stock images. Round goes to the stock sites. AI 1 – 3 Stock.

Round 5: An NGS instrument on a bench in a genomics lab.

I have a feeling the higher-end stock sites are going to take this, as there aren’t a ton of NGS instruments so it might be overly specific for AI.

Round 5 AI Fails

Both Midjourney and Dall-E needed guidance that a next-generation sequencer wasn’t some modular device used for producing techno music.

With DALL-E, however, it proved to not be particularly trainable. I imagine its AI mind thinking: “Oh, you want a genome sequencer? How about if I write it for you in gibberish?” That was followed by it throwing its imaginary hands in the air and generating random imaginary objects for me.

Midjourney also had some pretty but far-out takes, such as this thing which looks much more like an alien version of a pre-industrial loom.

Round 5 AI vs. Stock

This gets a little tricky, because AI is never going to show you a specific genome sequencer, not to mention that if it did you could theoretically run into trademark issues. With that in mind, you have to give them a little bit of latitude. Genome sequencers come in enough shapes and sizes that there is no one-size-fits-all description of what one looks like. Similarly, there are few enough popular ones that unless you see a specific one, or its tell-tale branding, you might not know what it is. Can you really tell the function of one big gray plastic box from another just by looking at it? Given those constraints, I think Midjourney did a heck of a job:

There is no reason that a theoretical NGS instrument couldn’t look like any of these (although some are arguably a bit small). Not half bad! Let’s see what I can get from stock sites, which also will likely not want to show me logos.

iStock had a closeup photo of a MinION which, while it technically fits the description of what we were looking for, doesn’t fit the intent. Aside from that, it had a mediocre rendering of something supposed to be a sequencer and a partial picture of something rather old which might be an old Sanger sequencer?

After not finding anything at all on Getty Images, down to the wire right at the 5:00 mark I found a picture of a NovaSeq 6000. Science Photo Library had an image of an ABI SOLiD 4 on a bench in a lab with the lights off.

Unfortunately, Getty has identified the person in the image, meaning that even though you couldn’t ID the individual just by looking at the image, it isn’t suitable for commercial use. I’m therefore disqualifying that one. Is the oddly lit (and extremely expensive) picture of the SOLiD 4 or the conceptually off-target picture of the MinION better than what the AI came up with? I don’t think I can conclusively say either way, and one thing that I dislike doing as a marketer is injecting my own opinion where it shouldn’t be. The scientists should decide! For now, this will be a tie.

AI 1, Stock 3, Tie 1

Round 6: A high-magnification fluorescent micrograph of neural tissues.

My PhD is in neuroscience so I love this round. If Science Photo Library doesn’t win this round they should pack up and go home. Let’s see what we get!

Round 6 AI Fails

Dall-E got a rough, if not slightly cartoony, shape of neurons but never really coalesced into anything that looked like a genuine fluorescent micrograph (top left and top center in the image below). Stable Diffusion, on the other hand, was either completely off the deep end or just hoping that if it overexposed out-of-focus images enough that it could slide by (top right and bottom row).

Round 6 AI vs. Stock

Midjourney produced a plethora of stunning images. They are objectively beautiful and could absolutely be used in a situation where one only needed the concept of neurons rather than an actual, realistic-looking fluorescent micrograph.

They’re gorgeous, but they’re very obviously not faithful reproductions of what a fluorescent micrograph should look like.

iStock didn’t produce anything within the time limit. I found high-magnification images of neurons which were not fluorescent (probably colored TEM), fluorescent images of neuroblastomas (not quite right), and illustrations of neurons which were not as interesting as those above.

Getty Images did have some, but Science Photo Library had pages and pages of on-target results. SPL employees, you still have jobs.

A small selection from page 1 of 5.

AI 1, Stock 4, Tie 1

Round 7: A colored scanning electron micrograph of carcinoma cells.

This is another one where Science Photo Library should win handily, but there’s only one way to find out!

Round 7 AI Fails

None of the AI tools failed in such a spectacular way that it was funny. Dall-E produced results which suggested it almost understood the concept, although could never put it together. Here’s a representative selection from Dall-E:

… and from Stable Diffusion, which as expected was further off:

Round 7 AI vs. Stock

Midjourney actually got it, and if these aren’t usable, they’re awfully close. As with the last round, these would certainly be usable if you needed to communicate the concept of a colored SEM image of carcinoma cells more than you needed accurate imagery of them.

iStock didn’t have any actual SEM images of carcinomas which I could find within the time limit, and Midjourney seems to do just as good of a job as the best illustrations I found there:

Getty Images did have some real SEM images, but the ones I found were credited to Science Photo Library, and their selection was absolutely dwarfed by SPL’s collection, which again had pages and pages of images of many different cancer cell types:

It just keeps going. There were 269 results.

Here’s where this gets difficult. On one hand, we have images from Midjourney which would take the place of an illustration and which cost me less than ten cents to create. On the other hand, we have actual SEM images from Science Photo Library that are absolutely incredible, not to mention real, but depending on how you want to use them, would cost somewhere in the $200 – $2000 range per photo.

To figure out who wins this round, I need to get back to the original premise: Can AI replace stock in life science marketing? These images are every bit as usable as the items from iStock. Are they as good as the images from SPL? No, absolutely not. But are marketers always going to want to spend hundreds of dollars for a single stock photo? No, absolutely not. There are times when it will be worth it, but many times it won’t be. That said, I think I have to call this round a tie.

AI 1, Stock 4, Tie 2

Round 8: A ribbon diagram of a large protein showing quaternary structure.

This is something that stock photo sites should have in droves, but we’ll find out. To be honest, for things like this I personally search for images with friendly licensing requirements on Wikimedia Commons, which in this case gives ample options. But that’s outside the scope of the experiment so on to round 8!

Round 8 AI Fails

I honestly don’t know why I’m still bothering with Stable Diffusion. The closest it got was something which might look like a ribbon diagram if you took a massive dose of hallucinogens, but it mostly output farts.

Dall-E was entirely convinced that all protein structures should have words on them (a universally disastrous yet hilarious decision from any AI image generator) and I could not convince it otherwise:

This has always baffled me, especially as it pertains to DALL-E, since it’s made by OpenAI, the creators of ChatGPT. You would think it would be able to at least output actual words, even if used nonsensically, but apparently we aren’t that far into the future yet.

Round 8 AI vs. Stock

While Midjourney did listen when I told it not to use words and provided the most predictably beautiful output, they are obviously not genuine protein ribbon diagrams. Protein ribbon diagrams are a thing with a very specific look, and this is not it.

I’m not going to bother digging through all the various stock sites because there isn’t a competitive entry from team AI. So here’s a RAF-1 dimer from iStock, and that’s enough for the win.

AI 1, Stock 5, Tie 2. At this point AI can no longer catch up to stock images, but we’re not just interested in what “team” is going to “win” so I’ll keep going.

Round 9: A 3D illustration of plasmacytes releasing antibodies.

I have high hopes for Midjourney on this. But first, another episode of “Stable Diffusion Showing Us Things”!

Round 9 AI Fails

Stable Diffusion is somehow getting worse…

DALL-E was closer, but also took some adventures into randomness.

Midjourney wasn’t initially giving me the results that I hoped for, so to test if it understood the concept of plasmacytes I provided it with only “plasmacytes” as a query. No, it doesn’t know what plasmacytes are.

Round 9 AI vs. Stock

I should just call this Midjourney vs. Stock. Regardless, Midjourney didn’t quite hit the mark. Plasmacytes have an inordinately large number of ways to refer to them (plasma cells, B lymphocytes, B cells, etc.) and it did eventually get the idea, but it never looked quite right and never got the antibodies right, either. It did get the concept of a cell releasing something, but those things look nothing like antibodies.

I found some options on iStock and Science Photo Library (shown below, respectively) almost immediately, and the SPL option is reasonably priced if you don’t need it in extremely high resolution, so my call for Midjourney has not panned out.

Stock sites get this round. AI 1, Stock 6, Tie 2.

Round 10: An illustration of DNA methylation.

This is fairly specific, so I don’t have high hopes for AI here. The main question in my mind is whether stock sites will have illustrations of methylation specifically. Let’s find out!

Round 10 AI Fails

I occasionally feel like I have to fight with Midjourney to not be so artistic all the time, but adding things like “realistic looking” or “scientific illustration of” didn’t exactly help.

Midjourney also really wanted DNA to be a triple helix. Or maybe a 2.5-helix?

I set the bar extremely low for Stable Diffusion and just tried to get it to draw me DNA. Doesn’t matter what style, doesn’t need anything fancy, just plain old DNA. It almost did! Once. (Top left below.) But in the process it also created a bunch of abstract mayhem (bottom row below).

With anything involving “methylation” in the query, DALL-E did that thing where it tries to replace accurate representation with what it thinks are words. I therefore tried to just give it visual instructions, but that proved far too complex.

Round 10 AI vs. Stock

I have to admit, I did not think that it was going to be this hard to get reasonably accurate representations of regular DNA out of Midjourney. It did produce some, but not many, and the best looked like it was made by Jacob the Jeweler. If methyl groups look like rhinestones, 10/10. Dall-E did produce some things that look like DNA stock images circa 2010. All of these have the correct helix orientation as well: right handed. That was a must.

iStock, Getty Images, and Science Photo Library all had multiple options for images to represent methylation. Here is one from each, shown in the aforementioned order:

The point again goes to stock sites.

Final Score: AI 1, Stock 7, Tie 2.

Conclusion / Closing Thoughts

Much like generative text AI, generative image AI shows a lot of promise, but doesn’t yet have the specificity and accuracy needed to be broadly useful. It has a way to go before it can reliably replace stock photos and illustrations of laboratory and life science concepts for marketing purposes. However, for concepts which are fairly broad, or in cases where getting the idea across is sufficient, AI can sometimes act as a replacement for basic stock imagery. As for me, if I get a good feeling that AI could do the job and I’m not enthusiastic about the images I’m finding on lower-cost stock sites, I’ll most likely give Midjourney a go. Sixty dollars a month gets us functionally infinite attempts, so the value is pretty good. If we get a handful of stock images out of it each month, that’s fine – and there are some from this experiment we’ll certainly be keeping on hand!

I would not be particularly comfortable about the future if I were a stock image site, but especially for higher-quality or specialized / more specific images, AI has a long way to go before it can replace them.

"Want your products or brand to shine even more than it does in the AI mind of Midjourney? Contact BioBM and let’s have a chat!"

Google Ads Auto-Applied Recommendations Are Terrible

Unfortunately, Google has attempted to make them ubiquitous.

Google Ads has been rapidly expanding their use of auto-applied recommendations recently, to the point where it briefly became my least favorite thing until I turned almost all auto-apply recommendations off for all the Google Ads accounts which we manage.

Google Ads has a long history of thinking it’s smarter than you and failing. Left unchecked, its “optimization” strategies have the potential to drain your advertising budgets and destroy your advertising ROI. Many users of Google Ads’ product ads should be familiar with this. Product ads don’t allow you to set targeting, and instead Google chooses the targeting based on the content on the product page. That, by itself, is fine. The problem is when Google tries to maximize its ROI and looks to expand the targeting contextually. To give a practical example of this, we were managing an account advertising rotary evaporators. Rotary evaporators are very commonly used in the cannabis industry, so sometimes people would search for rotary evaporator related terms along with cannabis terms. Google “learned” that cannabis-related terms were relevant to rotary evaporators: a downward spiral which eventually led to Google showing this account’s product ads for searches such as “expensive bongs.” Most people looking for expensive bongs probably saw a rotary evaporator, didn’t know what it was but did see it was expensive, and clicked on it out of curiosity. Google took that cue as rotary evaporators being relevant for searches for “expensive bongs” and then continued to expand outwards from there. The end result was us having to continuously play negative keyword whack-a-mole to try to exclude all the increasingly irrelevant terms that Google thought were relevant to rotary evaporators because the ads were still getting clicks. Over time, this devolved into Google expanding the rotary evaporator product ads to searches for – and this is not a joke – “crack pipes”.

The moral of that story, which is not about auto-applied recommendations, is that Google does not understand complex products and services such as those in the life sciences. It likewise does not understand the complexities and nuances of individual life science businesses. It paints in broad strokes: broad strokes are easier to code, the managers don’t care because their changes make Google money, and with something of a monopoly – roughly 90% of search volume excluding China – Google has very little incentive to improve its services, because almost no one is going to pull their advertising dollars. Having had some time to observe the changes which Google’s auto-applied recommendations make, I can see the implicit assumptions which got built in. Google either assumes you are selling something like pizza or legal services and largely have no clue what you’re doing, or that you have a highly developed marketing program with holistic, integrated analytics.

As an example of the damage that Google’s auto-applied recommendations can do, consider a CRO we are working with. Like many CROs, they offer services across a number of different indications, with different ad groups for different indications. After Google had auto-applied some recommendations, some of which were bidding-related, we ended up with ad groups which had over a 100x difference in cost per click. In ad groups with highly specific and targeted keywords, there is no reasonable argument for how, in the process of optimizing for conversions, Google could decide that one ad group should have a CPC more than 100x that of another. The optimizations did not lead to more conversions, either.

Google’s “AI” ad account optimizer further decided to optimize a display ad campaign for the same client by changing bidding from manual CPC to optimizing for conversions. The campaign went from getting about 1800 clicks / week at a cost of about $30, to getting 96 clicks per week at a cost of $46. CPC went from $0.02 to $0.48! No wonder they wanted to change the bidding; they showed the ads 70x less (CTR was not materially different before / after Google’s auto-applied recommendations) and charged 24x more. Note that the targeting did not change. What Google was optimizing for was their own revenue per impression! It’s the same thing they’re doing when they decide to show rotary evaporator product ads on searches for crack pipes.
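The CPC arithmetic here is just weekly spend divided by weekly clicks; a quick sketch with the approximate figures above:

```python
# CPC before and after Google's auto-applied bidding change,
# using the approximate weekly figures cited above.
clicks_before, cost_before = 1800, 30.00
clicks_after, cost_after = 96, 46.00

print(f"CPC before: ${cost_before / clicks_before:.3f}")  # ~$0.017/click (the cited $0.02 is rounded)
print(f"CPC after:  ${cost_after / clicks_after:.2f}")    # ~$0.48/click
```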

“Save time.” Is that what we’re doing?

Furthermore, Google’s optimizations to the ads themselves amount to horribly generic guesswork. A common optimization is to simply include the name of the ad group or terms from pieces of the destination URL in ad copy. GPT-3 would be horrified at the illiteracy of Google Ads’ optimization “AI”.

A Select Few Auto-Apply Recommendations Are Worth Leaving On

Google has a total of 23 recommendation types. Of those, I always leave on:

  • Use optimized ad rotation. There is very little opportunity for this to cause harm, and it addresses a point difficult to determine on your own: what ads will work best at what time. Just let Google figure this out. There isn’t any potential for misaligned incentives here.
  • Expand your reach with Google search partners. I always have this on anyway. It’s just more traffic. Unless you’re particularly concerned about the quality of traffic from sites which aren’t google.com, there’s no reason to turn this off.
  • Upgrade your conversion tracking. This allows for more nuanced conversion attribution, and is generally a good idea.

A whole 3/23. Some others are situationally useful, however:

  • Add responsive search ads can be useful if you’re having problems with quality score and your ad relevance is stated as being “below average”. This will, generally, allow Google to generate new ad copy that it thinks is relevant. Be warned: Google is very bad at generating ad copy. It will frequently keyword-spam without regard to context, but at least you’ll see what it wants you to do to generate more “relevant” ads. Note that I suggest this over “improve your responsive search ads” so that Google doesn’t destroy the existing ad copy which you may have spent time and effort creating.
  • Remove redundant keywords / remove non-serving keywords. Google says that these options will make your account easier to manage, and that is generally true. I usually have these off because if I have a redundant keyword it is usually for a good reason and non-serving keywords may become serving keywords occasionally if volume improves for a period of time, but if your goal is simplicity over deeper data and capturing every possible impression, then leave these on.

That’s all. I would recommend leaving the other 18 off at all times. Unless you are truly desperate and at a complete loss for ways to grow your traffic, you should never allow Google to expand your targeting. That lesson has been repeatedly learned with Product Ads over the past decade plus. Furthermore, do not let Google change your bidding. Your bidding methodology is likely a very intentional decision based on the nature of your sales cycle and your marketing and analytics infrastructure. This is not a situation where best practices are broadly applicable, but best practices are exactly what Google will try to enforce.

If you really don’t want to be bothered at all, just turn them all off. You won’t be missing much, and you’re probably saving yourself some headaches down the line. From our experience thus far, it seems that the ability of Google Ads’ optimization AI to help optimize Google Ads campaigns for life sciences companies is far lesser than its ability to create mayhem.

"Even GPT-4 still gets the facts wrong a lot. Some things simply merit human expertise, and Google Ads is one of them. When advertising to scientists, you need someone who understands scientists and speaks their language. BioBM’s PhD-studded staff and deep experience in life science marketing mean we understand your customers better than any other agency – and understanding is the key to great marketing.

Why not leverage our understanding to your benefit? Contact Us."

How to Write a Life Science White Paper

From the perspective of the marketer, a critical early task in the life science buying journey is education. It may even come before your audience of scientists recognizes they have a problem which needs a product or service to solve it. Once you have piqued their interest and seeded an idea in their minds, you need a lot more to get them across the finish line. Sometimes, a longer-form method of communication is merited, and that’s where the white paper comes in.

The Life Science Buying Journey

For those who are relatively new to this website, it should be expressed that I’m largely an adherent to Hamid Ghanadan’s viewpoint of the scientific buying journey, which views scientists as inherently both curious and skeptical. It’s illustrated in detail in his excellent book Persuading Scientists which is well-deserving of the long-overdue shout out. I’ve captured some of the concepts in a previous post: “The Four Key Types of Content.” To give the oversimplified TL;DR version of both:

  • The default state of scientists is curious. They readily take in information.
  • As they take in new information, they form ideas about it and transition from being curious to being skeptical.
  • If they cannot validate the information, they generally reject it.

You can see how a buying journey fits into this mindset:

  • The scientist is presented with a new idea.
  • As they learn more about this idea, they realize that they may need a product or service.
  • They critically evaluate the product(s) / service(s) presented to them.
  • A decision is made.

The goal of the marketer is to seed the scientist’s curiosity, continuing to provide them with information which will shape their viewpoint in your favor without engaging skepticism too early. That is how you maximize your chances of a positive purchasing decision.

Understanding What a White Paper Is … and Isn’t

A white paper is intended to provide either educational content (helpful, customer-centric information) or validation content (information which verifies a belief that the customers hold or a claim that the brand is making, which may be customer-centric or product-centric). In either situation, the primary purpose is to inform your audience. Novice marketers may fixate on the format (usually PDF) and conflate a white paper with a brochure, but they are two very different things.

All marketing documents exist on a rhetorical sliding scale between being fully informational and fully promotional. A brochure would be far onto the promotional side of that scale; it is extremely product-centric and its purpose is largely to encourage a purchase. A white paper would be most of the way towards the informational side of that scale. Creating a white paper which is overly promotional risks engaging the scientists’ skepticism before they have adopted your viewpoint, creating a situation where their inclination is to disbelieve you. This situation generally results in them rejecting your offering.

Writing Copy for an Effective White Paper

Your white paper should be about:

  • a single topic
  • which is of interest to your audience
  • of which you know substantially more than your audience

This may seem simple, but framing it can be difficult.

Presumably, your company is in the business of solving some type of problems for life scientists. They might not know what their problem is, but you do. Why should they care? Why is what you are doing compelling? You almost certainly have answers to these questions, but you likely have them framed in the context of your product. How can you take those answers and communicate them in a manner which is customer-centric instead of product-centric? Start by talking about your scientist-customers’ problem rather than your solution and you’ll be headed in the right direction.

There are times when a more product-focused white paper can be appropriate, however. For instance, you may have a new technology which is unfamiliar to your audience and you need to educate them about it. In this case, you have to talk about your solution to some extent. When that is the case, be sure to focus on providing information about the technology, not promotion for the product. You need to take care to ensure the information is objective, is communicated in an unbiased manner, is well-referenced with independent sources, and uses independent voices (e.g. voice of the customer) wherever an opinion is necessary.

Formatting a White Paper Effectively

There is no particular length restriction on a life science white paper, but if you are calling it a white paper, your audience is likely expecting it to be somewhat in-depth; a two-page minimum is a good guideline to adhere to. For much longer white papers, consider yourself constrained by your ability to maintain your audience's attention. Demonstrating your expertise does not mean writing more than you need to. As is almost always the case, less is more: be as concise as you can while fully communicating your point.

Avoid walls of text. Too many words and not enough visuals will make your audience less likely to get through your content. Use illustrations where possible, and don’t feel bad using relevant stock imagery to break things up. Ensure the document isn’t boring to the eyes by using brand-relevant colors, shapes, iconography, and other visuals. Ideally, you should have a generalized white paper format which you maintain throughout all of your documents to provide consistency. You want people who read your white paper to know it is your brand’s white paper, even if they didn’t see a logo.

Circling back on what a white paper is and isn't, you'll recall that we need a primarily informational document. However, you might not want an entirely informational document. Your job is to sell things, and purely informational things are generally not great at selling. You want to sprinkle some promotion in there. But how? Through creative use of formatting! You don't want people to become skeptical of the information you are providing them in the body of the white paper, so don't put promotional content in the body of the white paper! Use clearly-delineated sections to cordon off your promotional content. Help prevent skepticism of your promotional messages by using voice-of-customer content (testimonials, etc.) whenever possible. You can also save your promotional messages for where customers most expect them – the end of the document. And as with almost all effective marketing documents, don't leave out the call-to-action!

This is a stock image of life science brochure templates and doesn’t say anything meaningful at all, but you probably stopped to look at them because they’re visually appealing.

Deploy Your White Paper Effectively

Far too often, life science companies will write a really good white paper and then tuck it away in some remote corner of their website. You have it – use it! Post about it on social media (more than once!), put it somewhere on your website which is relevant and readily findable by anyone looking for that kind of information, and blast it out in an email to a well-segmented section of your audience. If appropriate, use it as the hook for a well-targeted paid advertising campaign. The worst thing you can do after spending the time and resources to create a white paper is to only have a few dozen people ever read it.

Presumably you’ll be using your white paper to generate leads and will therefore have it gated with a download form (although you certainly don’t have to). If it is gated, create a compelling download page for your white paper which previews just enough of the content to make the audience want more but without giving up its most important lessons.

Recap on Effective Life Science White Papers

To write an effective white paper:

  • Understand where your white paper fits within the customer journey.
  • Maintain its primarily informational purpose.
  • Keep to one topic which will be of interest to your audience.
  • Focus on information which most of your audience likely will not know.
  • Allow what you have to communicate to dictate the length.
  • Don’t skimp on the visuals.
  • Clearly separate any promotional messages to avoid creating skepticism about the core topic.
  • Shout it from the rooftops to get attention to it!

White papers are centerpieces of many life science demand generation campaigns. Understand and implement these guidelines, and your white papers can help drive successful lead generation for your life science company as well.

"Not sure how to best deploy content to help fuel your marketing efforts? Experiencing writer’s block? Don’t spend time fretting, just contact BioBM. Our life science marketing experts are here to help innovative companies like yours craft purposeful, effective content to influence your scientist-customers and encourage them into action."

Stop Hosting Your Own Videos

I know this isn’t going to apply to 90% of you, and to anyone who is thinking “of course – why would anyone do that?” – I apologize for taking your time. Those people who see this as obvious can stop reading. What that 90% may not know, however, is that the other 10% still think, for some terrible reason, that hosting their own videos is a good idea. So, allow me to state conclusively:

Hosting your own videos is always a terrible decision. Let’s elaborate.

Reasons Why Hosting Your Own Videos Is A Terrible Decision:

  1. Your audience is not patient. If you think they’re going to wait through more than one or two (if you’re lucky) periods of buffering, you’re wrong. Videos are expensive to produce. If you’re putting in the resources to make a video, chances are you want as much of your audience as possible to see it. Buffering will ensure they don’t.
  2. Your servers are not built for this. Your website is most likely hosted on a server which is designed to serve up webpages. Streaming video content is probably not your host’s cup of tea. In fact, they’d probably rather you not do it (or tell you to buy a super-expensive hosting plan to accommodate the bandwidth requirements of streaming video).
  3. Your video compression is probably terrible. Your video editing software certainly will export your video into a compressed file. "Compressed," in this sense, means not the giant, unwieldy raw data file that you would otherwise have. It does not mean "small enough to stream effectively." You know whose video compression is on another level from anything you're going to produce? YouTube, Vimeo, and most other major services that stream video on the internet as a business.
  4. There are companies that do this professionally. When I was in undergrad and majoring in chemical engineering, the other majors jokingly referred to us as “glorified plumbers,” but I don’t touch pipes. I don’t know the first thing about plumbing. So what do I do when I get a leak? I call a plumber, because they’ll definitely solve the problem far better than I would. Likewise, if you want to host video, why not get a professional video hosting service? There’s plenty of them out there, including some that are both very reputable and inexpensive.

An Example

I’m at my office on a reasonably fast internet connection. It’s cable, not fiber optic, but it’s also 11:30 in the morning – not prime “Netflix and chill” time when the intertubes are clogged up with people binge watching a full season of House of Cards. Just to show you that any bandwidth problems aren’t on my end, I did an Ookla Speedtest:

The internet is fast.

239 Mbps. Not tech school campus internet kind of fast, but more than fast enough to stream multiple YouTube videos at 4k if I wanted to.

And now for the example… I’m not going to tell you whose video this is, but they have an ~1 minute long video to show how easy their product is to use. Luckily for me, they don’t have a lot of branding on it so I can use them as an example without shaming them. The below screenshots are where the video stopped to buffer. Note that the video was not fullscreened and was about 1068 x 600. You can click the images to see them full size and see the progress bar and time at the bottom.

Made it 18 seconds! Off to a slightly less than disastrous start…

28 seconds. Getting there…

Well that didn’t go far. 32 seconds.

37 seconds. There’s no way I’d still be watching this if I wasn’t doing this for the purposes of demonstration.

42 seconds…

51 seconds! Almost there!

“Done” … or not quite done. 56 seconds. I don’t even know why it stopped to buffer here as almost the entire rest of the video was already downloaded.

The video stopped playing 7 times in the span of 64 seconds.

What To Do Instead

Perhaps the most well-known paid video hosting service, Vimeo has a pro subscription that will allow you to embed ad-free videos without their branding on it for $20 / month. There’s a bunch of other, similar services out there as well. Or, if you don’t want to spend anything and don’t mind the possibility of an ad being shown prior to your video, you can just embed YouTube videos. The recommended videos which show after playback can be easily turned off in the embed options. You can even turn off the video title and player controls if you don’t want your audience to be able to click through to YouTube or see the bar at the bottom (although the latter also makes them unable to navigate through your video).
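If you go the YouTube route, the embed options mentioned above map to URL parameters. Here's a minimal sketch in TypeScript, assuming a standard iframe embed; `rel` and `controls` are YouTube's embed parameters, while the video ID and container element are placeholders. Note that YouTube has tweaked the behavior of `rel` over time, so verify against their current documentation.

```typescript
// Minimal sketch: build a YouTube embed with the options discussed above.
// "VIDEO_ID" and "video-container" are placeholders for your own values.
function createYouTubeEmbed(videoId: string, hideControls: boolean): HTMLIFrameElement {
  const params = new URLSearchParams({ rel: "0" }); // limit post-playback recommendations
  if (hideControls) {
    params.set("controls", "0"); // hides the player bar (viewers also lose seeking)
  }
  const iframe = document.createElement("iframe");
  iframe.src = `https://www.youtube.com/embed/${videoId}?${params.toString()}`;
  iframe.width = "560";
  iframe.height = "315";
  iframe.allowFullscreen = true;
  return iframe;
}

// Usage: append the player to a container element on your page.
document.getElementById("video-container")?.appendChild(createYouTubeEmbed("VIDEO_ID", false));
```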

Basically, if you want your videos to actually get watched, do anything other than hosting them yourself.

P.S. – If you’ve read all this and still think hosting your own videos is the correct solution, which it’s not, here’s a tip: upload them to YouTube, then download them using a tool like ClipConverter. This way you’ll at least get the benefit of YouTube’s video compression, which is probably the best in the world.

"Want marketing communications that truly captivate and engage your customers? It’s time to contact BioBM. Our life science marketing experts are here to help innovative companies better reach, influence, and convert scientists."

FAQs: Content and SEO’s Low-Hanging Fruit

Creating content in support of your products and services is hard. Finding something to say which is both unique and valuable to the audience is a non-trivial endeavor; however, it remains critical for persuading your audience that your product or service is right for them … and for persuading search engines that your website is important.

That said, it’s incredible how many brands overlook this one simple, effective, easy-to-create content tool: the FAQ.

You don't even have to do the thinking for an FAQ. Your customers do it for you. In your day-to-day sales and support operations, customers are asking questions all the time. All you need to do is document the questions and their answers, put them on your website, and bingo! – you now have an FAQ.

FAQ Best Practices

It’s absolutely possible to make a terrible FAQ, but really easy not to. If you follow these guidelines when creating your FAQ, you’ll be set:

  • Talk to your sales and / or support teams about the questions that they are getting from customers. If you’re creating an FAQ, you want to be sure it’s answering questions that your customers actually have.
  • The best FAQ questions are broadly relevant and / or address an important question. If a question concerns a niche application relevant to only the small subset of your audience using your product that way, it's probably not worthy of adding to the FAQ. If you have too much clutter, people won't use it.
  • It's really easy to end up with oceans of FAQ content. You don't want your FAQ content to overwhelm your audience because there is too much of it. In addition to being selective with what content makes the grade for your FAQ section, use design tools such as accordions to minimize content overload and help ensure that customers are only presented with the FAQ content which is most relevant to them.
  • Keep FAQ content on the page of the product / service it pertains to whenever possible. Forcing people to navigate away to FAQ content is usually neither a good navigational experience nor the best for SEO.
  • If you have a long FAQ section, try to keep the most important and / or broadly relevant information towards the top, where it will be more likely to be seen.

To give you a better idea of how you may be able to leverage FAQ content, let’s take a look at a few examples.

FAQ Critiques

Agilent’s website makes ample use of FAQ content, which is great. To give an example, I’ll look at the page for their 280FS AA Atomic Absorption Spectrometer. They have a lot of stuff on this page, but they use a left-hand navigation menu with anchor links to help users find the information they need. In the “Support” section there is an FAQ, along with other categories of content, each of which has an accordion feature.

FAQ section on a product page of the Agilent website

Agilent's FAQ has a good amount of content in it, and they make it more manageable by only showing the questions; you have to click a question to see its answer. Unfortunately, clicking a question directs you to a page containing only that one question and answer, meaning the page is of relatively low value and takes the user away from the bulk of the information they were seeking, creating a sub-optimal user experience (you wait for the page to load, then click back to return to where you were). Additionally, having many pages with "thin" content is far less beneficial from an SEO standpoint than having one page with lots of content. If, for instance, they instead had a nested accordion in which the answer dropped down when clicked, they would circumvent the need for individual pages for each answer while still showing a relatively manageable amount of information to each user.
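As an illustration of that suggestion, a nested accordion needs only a few lines of script. This is a hypothetical sketch – the class names are invented for the example, not taken from Agilent's actual markup:

```typescript
// Hypothetical markup: each FAQ entry is a .faq-item containing a
// .faq-question followed by a .faq-answer. Clicking a question toggles its
// answer in place, so every Q&A stays on one content-rich page and the user
// never navigates away.
document.querySelectorAll<HTMLElement>(".faq-item > .faq-question").forEach((question) => {
  question.addEventListener("click", () => {
    const answer = question.nextElementSibling as HTMLElement | null;
    if (answer?.classList.contains("faq-answer")) {
      answer.hidden = !answer.hidden; // drop the answer down, or collapse it
    }
  });
});
```

Native HTML `<details>` / `<summary>` elements can achieve the same disclosure behavior with no script at all.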

Laboratory Supply Network also makes frequent use of FAQs. FAQs are perhaps of even greater value for distributors and resellers since these companies are often starved of unique content. FAQs, product reviews, and other mechanisms for generating unique content can both improve their SEO and differentiate them from competition who may be selling similar (or the same) products. As an example, we’ll use their Q500 FAQ on Homogenizers.net. Laboratory Supply Network puts their FAQs in a separate tab from other information on the product page, helping to prevent clutter. They also have all the FAQ information directly on the product page, which maximizes the SEO benefit. However, within the FAQ tab, there are no aids to help users find the information which may be of value to them. The only way to see which questions are answered is to scroll through them all – and through their answers. This is non-ideal, especially if there are a lot of questions and / or the questions have long answers. While users will scroll, too much scrolling decreases the likelihood that content near the bottom will be seen.

FAQ section on a product page of the Homogenizers.net website

In Conclusion

FAQs add value for your customers and improve the SEO of your website. As with just about any content generation effort, your primary question should be: "can we do this in a manner which is valuable for our audience?" If you have a complex product or service, or there are any common uncertainties customers have about your business, it's likely that you can both deliver and receive value through an FAQ. Follow the best practices above, and you'll maximize that value.

"Looking to create content which has a discernible impact on your business? Looking for practical, realistic means to improve your search marketing? BioBM helps life science companies with almost any marketing needs. Contact us today and learn how we can help build your company into a powerhouse brand with rapidly growing revenues."

We Just Got Skyscrapered

Just yesterday, we got skyscrapered. No, we didn’t get an office in a giant building or fly an ad from one or anything like that, nor is that some weird pop-culture thing that teenagers are putting on YouTube. We were the target of an attempt at “skyscraper marketing” … and I’m talking about it, so I guess it worked in a sense.

I’ll talk more about this particular instance in a moment, but first I wanted to give an intro to skyscraper marketing for anyone who isn’t familiar with it.

The “What” and “Why” of Skyscraper Marketing

Skyscraper marketing is one method which was popularized after Google's link-spam crackdowns of 2012–2013 (most notably the Penguin update). To summarize the implications in brief: there was once a time when you could "trick" Google into thinking that your website was more important than it was by posting links around the internet pointing to your website. Those updates put an end to that and penalized websites which didn't comply. From then on, if you wanted to prove your website's importance (and thereby improve your search ranks), you needed to earn your backlinks organically.

That’s about the time when content marketing became more important. From that point, not only was it the validation that showed prospects you knew what you were talking about, but it was the primary tool at your disposal to influence your search rankings (beyond the basic on-site optimization, such as optimized URLs and title tags, that everyone does and therefore isn’t a real source of competitive advantage). The more shareable the content, the more backlinks it would likely get, and therefore the better it was for SEO.

Thus, Skyscraper Marketing was devised. At its most basic, I can break it down into a three step process:

  1. Find successful content.
  2. Improve upon it.*
  3. Share it with people who would be interested in it and, in turn, share it themselves.

*The necessity for improvement is debatable, but you do have to do something to it. More on that in a moment…

The “How” of Skyscraper Marketing

Skyscraper marketing is, essentially, a type of influencer marketing in that the important part is the last step – getting people with engaged audiences to share it. That being the case, there are two primary approaches (and you don’t have to choose between them – you can do both at the same time).

The first approach is the incremental improvement approach. You find some good content which you have something to add to / make better / pose a counterpoint to / etc., then distribute it to a bunch of people who would find it relevant and potentially want to share it. In this approach, you’re adding something to the general body of knowledge in the hope that your contributed insight is enough to make it a worthwhile share – especially from people who have large audiences themselves. Again, the goal is to get as many backlinks and as many eyeballs as possible (those goals do overlap) so the more people you reach out to the better.

The second approach is the "stroking one's ego" approach. In this approach, your goal isn't necessarily to improve upon good pieces of content, but rather to act as an aggregator. You take really good tidbits from the thinking of a number of different influencers and repackage them into a single, easily digestible, and readily shareable piece of content, being sure to reference and link to the authors / posts whose thinking you aggregated. You then reach back out to those people and let them know that you published something which referenced them. People, being generally inclined towards things that make them seem important, will share your article which highlights their own thinking.

BioBM’s Skyscraper Marketing Tips

As with influencer marketing, you want to take care to do it correctly. If you don’t, you’ll not only waste your time and effort, but you’ll also get a reputation among the influencers in your market as a peddler of junk content. If that happens, skyscraper marketing or other forms of influencer marketing will be more difficult for you in the future. Just as poor quality content can reflect badly upon your brand, asking people to share poor quality content will erode your relationships with those influencers.

To not be “that guy,” here are some useful tips:

  • Don’t spam your network. Only send out good content and only send it to people who would find it genuinely relevant.
  • Don’t plagiarize copy … or ideas. If people realize they’ve heard it all before elsewhere, they probably won’t share it.
  • Note that “improved content” does not mean “longer content.” A lot of people have a habit of focusing on expanding upon an idea rather than improving upon it. Improvement is far more important than expansion. If you make something better or take a novel perspective on an idea, that’s far more worthy of sharing than simply adding more of the same.
  • “Improved content” also doesn’t mean that you need to improve on the idea itself. Communicating it more effectively – for instance, using illustration to more clearly demonstrate a complex point – can be just as valuable.
  • Always remember: your content behaves like a product and must be differentiated!
  • If you're going to take an ego-driven approach, be sure you show that you have taken the time to fully understand and eloquently explain the idea, and give some praise to the original author without coming off as a flatterer.

So to finish the story…

Upon checking our social media dashboards this morning, I saw this tweet:

I've been published more than the average person, but that's still enough to get my attention, so I gave it a quick read-through. I ended up not sharing it on our @BioBM Twitter account (and I don't use my personal @CHoytPhD Twitter anymore) for a few reasons. Primarily, we have very high standards for what BioBM publishes through our channels. We generally require there to be some element of newness, and we didn't find any particularly fresh thinking. (Sorry, Joe! No offense intended.) Secondarily, it was a really obvious skyscraper attempt, especially since the idea of ours which was shared wasn't strongly relevant to the body of the article and was simply one of many listed in bullet-point format towards the end. On the other hand, Joe did well not to plagiarize the ideas he referenced, but rather offered a tidbit of them with a link to the source. That was nice of him. (Thanks, Joe!)

That said, it did spark a discussion on Twitter, and his post did end up being linked to on our blog, so I suppose Joe can claim victory after all. He's also welcome to follow this shameless promotion for our "Marketing of Life Science Tools & Services" LinkedIn group and post it there as well. 2262 members and counting!

Just for fun, and because who doesn't love architecture, here are a few more images of skyscrapers. All images are courtesy of Unsplash, which in an amazing feat of generosity allows its beautiful, high-resolution images to be used for any purpose and without attribution. I find that so awesome that I'm giving them attribution anyway.


"Innovative companies deserve innovative marketing. If you want to leverage the next generation of marketing strategies to not only help you achieve success, but create genuine strategic advantage for your company, contact BioBM. It’s never too early or too late, but the sooner we get started the more of a head start you’ll have."

Why People Are Loyal … to ANYTHING

I was reading the MarketingCharts newsletter today and saw a headline: “What Brings Website Visitors Back for More?” The data was based on a survey of 1000 people, and they found the top 4 reasons were, in order:
1) They find it valuable
2) It’s easy to use
3) There is no better alternative for the function it serves
4) They like its mission / vision

Website Loyalty Data from MarketingCharts.com

I thought about it for a second and had a realization – this is why people are loyal to ANYTHING! And achieving these 4 things should be precisely our goal as marketers:
1) Clearly demonstrate value
2) Make your offerings – and your marketing – accessible
3) Show why your particular thing is the best. (Hint: If it’s not the best you probably need to refine your positioning to find the market segment that it is the best for.)
4) Tell your audiences WHY. Get them to buy into it. Don’t just drone on about the what, but sell them on an idea. Captivate them with a belief!

Do those 4 things well and you win.

BTW, the MarketingCharts newsletter is a really good, easy-to-digest newsletter – mostly B2C-focused, but there's some great stuff in there even for a B2B audience, and you can get most of the key points of each day's newsletter in under a minute.

"Captivate your customers’ loyalty. Contact BioBM and let’s turn your marketing program into a strategic advantage."

Are You Providing Self-Service Journeys?

Customers are owning more of their own decisions.

We’ve all heard the data on how customers are delaying contact with salespeople and owning more of their own decision journeys. Recent research from Forrester predicts that the share of B2B sales, by dollar value, conducted via e-commerce will increase by about a third from 2015 to 2020: from 9.3% to 12.1%. Why does Forrester see this number growing at such a rate? Primarily due to “channel-shifting B2B buyers” – people that are willfully conducting purchases entirely online rather than going through a manned sales channel.

All this adds up to more control of the journey residing with the customers themselves and fewer opportunities for salespeople to influence them. Your marketing needs to accommodate these control-desiring customers. It needs to accommodate as much of the buying journey as it can, and in many instances it can and should accommodate the entire buying journey – digitally.

Scientist considering an online purchase

Accommodating Digital Buying Journeys

Planning for the enablement of self-service journeys is a complex, multi-step process. In brief, it consists of:

  1. Understanding the relevant customer personas. Defining customer personas is always a somewhat ambiguous task, but my advice to those doing it is always not to over-define them. It's easy to achieve so much granularity that the exercise becomes meaningless: far too many personas with far too little to distinguish their journeys in a practical sense. It's okay to paint with a broad brush. For a relatively small industry such as ours, factors such as "level of influence on the purchasing decision" and "familiarity with the technology" are far better than the B2C-style demographic definitions you'll likely see if you look up examples of creating customer personas. It probably doesn't much matter if the scientist you're defining is a millennial or a Gen X-er, nor do you likely need to account for the difference between scientists and senior scientists. That's not what's important. Focus on the critical factors, and clear your mind of everything else.
  2. Mapping the journey for each persona. This can be done with data analytics, market research, and / or simply a good old-fashioned thought experiment, depending on your resources and capabilities as well as how accurate you need to be. If you're using data, use the customers who converted as examples and trace their buying journeys from the beginning (which will probably have online and offline components). Bin them into the appropriate personas, then use them to inform what the journey requires for each persona. The market research approach is fairly straightforward and can be done with any combination of interviews, focus groups, and user testing. If you're on a budget and just want to sit down and brainstorm the decision journey, start with each "raw" customer persona, then ask "where does this person want to go next in their decision journey?" A scientist may want more information, they may desire a certain experience, etc. Continue asking that question until you get to the point of purchase.
  3. Mapping information or experiences to each step of the journey. Once you know the layout of the journeys and the goals at each step, it should be relatively clear what you need to provide the customer at each step to get them to move forward in their journey. This step is really just asking: “How will we address their needs at each discrete step of their journey?”
  4. Determine the most appropriate channel for the delivery of each experience. You now know what you're going to deliver to each customer at each point in the decision journey to keep them moving forward, but how you deliver it matters as well. On paper, it might seem as though you can simply provide all the information and experiences the customer needs in one sitting, and that's all they'll need to complete their decision journey. In practice, it rarely works that way. Decisions often involve multiple stakeholders and often take place over days, weeks, or months. Few B2B life science purchasing decisions are made on impulse. For young or less familiar brands, the scientist may also need time to develop sufficient familiarity with your brand to be comfortable purchasing from you. This is where you must consider not only the structure of the buying journey, but the less tangible elements of its progression. Structured correctly, your roadmap should essentially remove steps from the buying journey for the customer. (A sketch of this persona-to-channel mapping follows the list.)
  5. Implement it! You now know what the scientists’ decision journeys look like and exactly how you’ll address them. Bring that knowledge into the real world and create a holistic digital experience that enables completion of the self-serve buying journey!
  6. That’s it! Your marketing is now ready for today’s (and tomorrow’s) digitally-inclined buyers.
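To make steps 1 through 4 a bit more concrete, here is one hypothetical way to sketch the persona-to-journey mapping as a simple data model; every name and value in it is illustrative, not prescriptive:

```typescript
// Each persona gets a mapped journey; each step pairs the customer's goal
// (step 2) with the content that addresses it (step 3) and the channel that
// delivers it (step 4). Everything here is an illustrative placeholder.
type Channel = "website" | "email" | "social" | "paid ad" | "webinar";

interface JourneyStep {
  customerGoal: string; // what the scientist wants at this point
  content: string;      // how you address that need
  channel: Channel;     // where you deliver it
}

interface Persona {
  name: string;
  purchaseInfluence: "end user" | "influencer" | "decision maker";
  technologyFamiliarity: "low" | "high";
  journey: JourneyStep[];
}

const benchScientist: Persona = {
  name: "Bench scientist, unfamiliar with the technology",
  purchaseInfluence: "influencer",
  technologyFamiliarity: "low",
  journey: [
    { customerGoal: "Understand the problem", content: "Educational white paper", channel: "paid ad" },
    { customerGoal: "Compare possible solutions", content: "Application notes and comparison pages", channel: "website" },
    { customerGoal: "Validate the shortlisted choice", content: "Customer testimonials and data", channel: "email" },
  ],
};

console.log(`${benchScientist.journey.length} mapped steps for: ${benchScientist.name}`);
```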

Owning the Journey

What we've outlined above will create a digital experience that allows customers to complete a purchasing decision on their own terms, which is something they increasingly want to do. Building such an experience will give you a definite advantage, but your customers will still shop around; it isn't enough to make them home in solely on your brand (which, if we're being honest, is an incredibly difficult task).

Digital marketing can do more than enable your scientist-customers to complete their decision journeys on their own, however. It is possible to create a digital experience that owns a hugely disproportionate share of the decision journey, providing outsized influence upon it. Such mechanisms are called decision engines, and when properly implemented they provide their creators with massive influence over their markets. If you would like to learn more about decision engines, check out this recent podcast we did on the topic with Life Science Marketing Radio or download our report on the topic.

    "Is your life science brand adopting to the changing nature of scientists’ buying journeys? If you’re not well on your way to completing your marketing’s digital transformation, then it’s probably time to call BioBM. Not only do we have the digital skill set to develop transformational capabilities for our life science clients, but we stay one step ahead with our strategies. We live in an age of constant change, and we work to ensure that our clients aren’t simply following today’s best practices, but are positioned to be the leaders of tomorrow. We’ll provide you with the next generation of marketing strategies, which will not only elevate your products and services, but turn your marketing program into a strategic advantage. So what are you waiting for?"

Avoid CPM Run of Site Ads

Not all impressions are created equal.

We don’t think about run of site (ROS) ads frequently as we don’t often use them. We try to be very intentional with our targeting. However, we recently had an engagement where we were asked to design ads for a display campaign on a popular industry website. The goal of the campaign was brand awareness (also something to avoid, but that’s for another post). The client was engaging with the publisher directly. We recommended the placement, designed the ads, and provided them to the client, figuring that was a done job. The client later returned to us to ask for more ad sizes because the publisher came back to them suggesting run of site ads because the desired placement was not available.

Some background for those less familiar with display advertising

If you are familiar with placement-based display advertising, you can skip this whole section. For the relative advertising novices, I’ll explain a little about various ad placements, their nomenclature, and how ads are priced.

An ad which is much wider than it is tall is generally referred to as a billboard, leaderboard, or banner ad. These are referred to as such because their placement on webpages is often near the top, although that is far from universally true, and even where it is true they often appear lower on the page as well. In our example on the right, which is a zoomed-out screenshot of the Lab Manager website, we see a large billboard banner at the top of the website (outlined in yellow), multiple interstitial banners of various sizes (in orange) and a small footer banner (green) which was snapped to the bottom of the page while I viewed it.

An ad which is much taller than it is wide is known as a skyscraper, although ones which are particularly large and a bit thicker may be called portraits, and large ads with 1:2 aspect ratios (most commonly 300 x 600 pixels) are referred to as half page ads. Lab Manager didn’t have those when I looked.

The last category of ad sizes is the square or rectangle ads. These are ads which do not have a high aspect ratio; generally less than 2:1. We can see one of those highlighted in purple. There is also some confusing nomenclature here: a very common ad of size 300 x 250 pixels is called a medium rectangle but you’ll also sometimes see it referred to as an MPU, and no one actually knows the original meaning of that acronym. You can think of it as mid-page unit or multi-purpose unit.

As you see, there are many different placements and ad sizes and it stands to reason that all of these will perform differently! If we were paying for these on a performance basis, say with cost-per-click, the variability in performance between the different placements would be self-correcting. If I am interested in a website’s audience and I’m paying per click, then I [generally] don’t care where on the page the click is coming from. However, publishers don’t like to charge on a per-click basis! If you are a publisher, this makes a lot of sense. You think of yourself as being in the business of attracting eyeballs. Even though to some extent they are, publishers do not want to be in the business of getting people to click on ads. They simply want to publish content which attracts their target market. Furthermore, they definitely don’t want their revenues to be at the whims of the quality of ads which their advertisers post, nor do they want to have to obtain and operate complex advertising technology to optimize for cost per view (generally expressed as cost per 1000 views, or CPM) when their advertisers are bidding based on cost per click (CPC).

What are Run Of Site Ads and why should you be cautious of them?

You may have noticed that the above discussion of ad sizes didn’t mention run of site ads. That is because run of site ads are not a particular placement nor a particular size. What “run of site” means is essentially that your ad can appear anywhere on the publisher’s website. You don’t get to pick.

Think about that. If your ads can appear anywhere, then where are they appearing in reality? They are appearing in the ad inventory which no one else wanted to buy. Your ads can’t appear in the placements which were sold. They can only appear in the placements which were not sold. If your insertion order specifies run of site ads, you are getting the other advertisers’ leftovers.

That’s not to say that ROS ads are bad in all circumstances, nor that publisher-side ad salespeople who try to sell them are trying to trick you in any way. There is nothing malicious going on. In order to get value from ROS ads, you need to do your homework and negotiate accordingly.

How to get good value from ROS ads

Any worthwhile publisher will be able to provide averaged metrics for their various ad placements. If you look at their pricing and stats you may find something like this:

Ad Format           CTR      CPM
Multi-unit ROS      0.05%    $40
Billboard Banner    0.35%    $95
Medium Rectangle    0.15%    $50
Half Page           0.10%    $50
Leaderboard         0.10%    $45
These are made-up numbers from nowhere in particular, but they are fairly close to numbers you might find in the real world at popular industry websites. Your mileage may vary.

One good assumption is that if people aren't clicking an ad, they're not paying attention to it. Averaged out over time, there is no other plausible reason why people would click one ad at a much higher rate than others; we cannot simply assume that the ads in those positions were better. Likewise, there is no logical reason why an ad's position alone would make a person less likely to click on it, aside from it not getting the person's attention in the first place. This is why billboard banners have very high clickthrough rates (CTR): they're the first thing you see at the top of the page. Publishers like to price large ads higher than smaller ads, but it's not always the case that the larger ads have a higher CTR.

With that assumption, take the inventory offered and convert the CPM to CPC using the CTR. The math is simple: CPC = CPM / (1000 * CTR).

Ad Format           CTR      CPM     Effective CPC
Multi-unit ROS      0.05%    $40     $80
Billboard Banner    0.35%    $95     $27
Medium Rectangle    0.15%    $50     $33
Half Page           0.10%    $50     $50
Leaderboard         0.10%    $45     $45
By converting to CPC, you have a much more realistic and practical perspective on the value of an ad position.
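If you want to script the conversion, it's a one-liner per placement. Here's a minimal sketch using the made-up numbers from the tables above (not real rate-card data):

```typescript
// Effective CPC = CPM / (1000 * CTR). Figures below are the illustrative
// numbers from the tables above.
interface Placement {
  name: string;
  ctr: number; // clickthrough rate as a fraction (0.05% -> 0.0005)
  cpm: number; // dollars per 1000 impressions
}

const placements: Placement[] = [
  { name: "Multi-unit ROS",   ctr: 0.0005, cpm: 40 },
  { name: "Billboard Banner", ctr: 0.0035, cpm: 95 },
  { name: "Medium Rectangle", ctr: 0.0015, cpm: 50 },
  { name: "Half Page",        ctr: 0.0010, cpm: 50 },
  { name: "Leaderboard",      ctr: 0.0010, cpm: 45 },
];

for (const p of placements) {
  const effectiveCpc = p.cpm / (1000 * p.ctr);
  console.log(`${p.name}: effective CPC ≈ $${effectiveCpc.toFixed(2)}`);
}
// Multi-unit ROS comes out to ~$80 per click; the billboard banner to ~$27.
```

Run this with any placement sheet a publisher gives you and the leftovers priced like premium inventory reveal themselves quickly.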

Here, we see those really "cheap" run of site ads are actually the most expensive on a per-click basis, and the billboard banner is the cheapest! Even for more nebulous goals like brand awareness, CTR remains our proxy for audience attentiveness. Without eye-tracking or mouse-pointer data, which publishers are highly unlikely to provide, CTR is the best attentiveness proxy we have.

With this information, you can make the case to the publisher to drop the price of their ROS ads. They might do it. They might not. Most likely, they'll meet you somewhere in the middle. By making a metrics-driven case to them, however, you'll be more likely to get the best deal they are willing to offer. (ProTip: If you're not picky about when your ads run, go to a few publishers with a low-ball offer a week or so before the end of the month. Most publishers sell ads on a monthly basis, and if they haven't sold all their inventory, you'll likely be able to pick it up at a cut rate. They get $0 for any inventory they don't sell. Just be ready to move quickly.)

The other situation in which ROS ads are useful and can be a good value is when you want to buy up all the ad inventory. Perhaps a highly relevant publisher is running a highly relevant feature, and that adds up to an audience you want to saturate. You can pitch a huge buy of ROS ads which will soak up the remaining inventory for the period when that feature is running, and potentially get good placements at the ROS price. Just make sure you know what you're buying and that the publisher isn't trying to sell their best placements on the side.

Lessons

  • Run of site ads aren’t all bad, but novice advertisers can end up blowing a bunch of money if they’re not careful.
  • Regardless of placement, always be mindful of the metrics of the ads you're buying.
  • Even if your campaign goals are more attention-oriented than action-oriented, CTR (and the effective CPC derived from it) is a good proxy for attentiveness.
"Want better ROI from your advertising campaigns? Contact BioBM. We’ll ensure your life science company is using the right strategies to get the most from your advertising dollars."

Can DALL-E 3 Generate Passable Life Science Images?

For those uninitiated to our blog, a few months ago I ran a fairly extensive, structured experiment comparing DALL-E 2, Midjourney 5, and Stable Diffusion 2 to see if any of them could potentially replace generic life science stock imagery. It ended up being both informative and accidentally hilarious, and you can see the whole thing here. But that was back in the far-gone yesteryear of July, it is currently December, and we live in the early era of AI, which means that months are now years and whatever happened 5 months ago is surely obsolete. Since DALL-E 3 came out in October, it's worth finding out whether it does better than its predecessor, which was notably inferior to Midjourney for 9 of the 10 queries in the previous round.

Perhaps I’ll do a more comprehensive comparison again later, but for now I’m just going to run some similar queries to the ones used last time to get a reasonable side-by-side comparison. Bing Image Creator was used to generate the images since labs.openai.com, which was used last time, is still plugged in to DALL-E 2.

Test 1: A female scientist performing cell culture at a biosafety cabinet.

The last time we tried this, DALL-E 2 gave us images that looked 75% like a picture and 25% like claymation, but even if that problem wasn’t there it was still somewhat far off. Let’s see if DALL-E 3 can do better.

I tried to be a little bit descriptive with these prompts, as supposedly DALL-E 3 uses GPT4 and better understands written requests. Supposedly. Here’s what it gave me for “A photograph of a female scientist in a laboratory sitting at a biosafety cabinet holding a serological pipette performing cell culture. Her cell culture flasks have yellow caps and her cell culture media is red.” It definitely got the yellow caps and red media. As for the rest…

It’s immediately clear that DALL-E 3, just like all its ilk, was primarily trained from large repositories of generic stock images, because all these labs look like what you would imagine a lab would look like if you didn’t know what a lab actually looked like. There are plenty of generic microscopes close at hand, although it didn’t even get those right. There are no biosafety cabinets to be found. Those vessels are essentially test tubes, not cell culture flasks. To top it off, all the female scientists look like porcelain dolls modeling for the camera. I tried to fix at least one of those things and appended “She is attentive to her work.” to the subsequent query. Surprisingly, this time it seemed to make some subtle attempts at things which might be construed as biosafety cabinets, but only to a completely naive audience (and, of course, it put a microscope in one of them).

Since DALL-E 2 arguably provided more realistic looking people in our previous test, I reverted to the simplicity of the previously used query: “A photograph of a female scientist performing cell culture at a biosafety cabinet.”

We’re not getting any closer. I have to call this an improvement because it doesn’t look like the image is melting, but it’s still very far from usable for a multitude of reasons: the plasticware is wrong, the pipettes are wrong, the people still look like dolls, the biosafety cabinets aren’t right, some of the media seems to be growing alien contamination, the background environment isn’t realistic, etc.

Today’s comic relief is brought to you by my attempt to get it to stop drawing people as porcelain dolls. I Googled around a bit and found that queries structured differently sometimes are better at generating realistic looking people so I gave this prompt a go: “2023, professional photograph. a female scientist performing cell culture at a biosafety cabinet.” What a gift it gave me.

Test 2: Liquid dripping from pipette tips on a high-throughput automated liquid handling system.

I’m choosing this one because it was the only query that DALL-E 2 was almost good at in our previous comparison. Out of 10 tests in that experiment, Midjourney produced the best output 9 times and DALL-E once. This was that one. However, stock imagery was still better. DALL-E 2’s image didn’t capture any of the liquid handler and the look of the image was still a bit melty. Let’s see if it’s improved!

Prompt: “A close up photograph of liquid dripping from pipette tips on a high-throughput automated liquid handling system.”

DALL-E 3 seems to have eschewed realism entirely and instead picked up Midjourney’s propensity for movie stills and sci-fi. Perhaps more specificity will solve this.

Prompt 2: “A close up photograph of liquid being dispensed from pipette tips into a 96-well microplate in a high-throughput automated liquid handling system.”

DALL-E clearly only has a vague idea of what a 96-well plate looks like and also cannot count; none of these “plates” actually have 96 wells. Regardless, these are no more realistic, clearly unusable, and DALL-E 2’s output would likely have a far greater probability of passing as real.

So nope, we’re still not there yet, and Midjourney is probably still the best option for realistic looking life science images based on what I’ve seen so far.

… but what about micrographs and illustrations?

All the previous tests dealt with recreations of real-world images. What about images which a microscope would take, or scientific illustrations? To find out, I quickly tested four prompts I had used last time:

  • A high-magnification fluorescent micrograph of neural tissues
  • A colored scanning electron micrograph of carcinoma cells
  • A ribbon diagram of a large protein showing quaternary structure
  • A 3D illustration of plasmacytes releasing antibodies

Here is the best it provided for each, in clockwise order from top left:

DALL-E 3's neurons were actually worse than DALL-E 2's, with nothing even being remotely close. Its carcinomas were more in line with what Midjourney provided last time, but look slightly more cartoonish. The ribbon diagram is better than any from the last test, although the structure is blatantly unrealistic. Its plasmacytes could make for a passable graphic illustration, if only they contained anything that looks like antibodies.

Conclusion

DALL-E 3 is a clear improvement from DALL-E 2. While it may be two steps forward and one step back, overall it did provide outputs which were closer to being usable than in our last test. It still has a way to go, and I don’t think it will peel us away from defaulting to Midjourney, but if it continues to improve at this rate, DALL-E 4 could provide a breakthrough for the generation of life science stock images.

"Want brand to shine brighter than even DALL-E could imagine? Contact BioBM. We’ll win you the admiration and attention of your scientist customers."

Can AI Replace Life Science / Laboratory Stock Images?

We’re over half a year into the age of AI, and its abilities and limitations for both text and image generation are fairly well-known. However, the available AI platforms have had a number of improvements over the past months, and have become markedly better. We are slowly but surely getting to the point where generative image AIs know what hands should look like.

But do they know what science looks like? Are they a reasonable replacement for stock images? Those are the meaningful questions if they are going to be useful for the purposes of life science marketing. We set out to answer them.

A Few Notes Before I Start Comparing Things

Being able to create images which are reasonably accurate representations is the bare minimum for the utility of AI in replacing stock imagery. Once we move past that, the main questions are those of price, time, and uniqueness.

AI tools are inexpensive compared with stock imagery. A mid-tier stock imagery site such as iStock or ShutterStock will charge roughly $10 per image if paid with credits, or anywhere from $7 down to roughly a quarter per image if you purchase a monthly subscription. Of course, if you want something extremely high-quality, images from Getty Images or a specialized science stock photo provider like Science Photo Library or ScienceSource can easily cost many hundreds of dollars per image. In comparison, Midjourney's pro plan, which is $60 / month, gives you 30 hours of compute time. Each prompt will provide you with 4 images and generally takes around 30 seconds. You could, in theory, acquire 8 images per minute, meaning each costs 0.4 cents. (In practice, with the current generation of AI image generation tools, you are unlikely to get images which match your vision on the first try.) DALL-E's pricing is even simpler: each prompt is one credit, also provides 4 images, and credits cost $0.13 each. Stable Diffusion is still free.
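As a sanity check on that arithmetic, here's the back-of-envelope math, assuming the plan prices and generation rates stated above (real-world throughput will vary):

```typescript
// Midjourney pro plan: $60/month for 30 hours of compute; ~30 seconds per
// prompt and 4 images per prompt, per the assumptions stated above.
const monthlyCost = 60;
const computeHours = 30;
const secondsPerPrompt = 30;
const imagesPerPrompt = 4;

const promptsPerHour = 3600 / secondsPerPrompt;                         // 120 prompts/hour
const imagesPerMonth = computeHours * promptsPerHour * imagesPerPrompt; // 14,400 images
console.log(`Midjourney: ~$${(monthlyCost / imagesPerMonth).toFixed(4)} per image`); // ~$0.0042 (0.4 cents)

// DALL-E: $0.13 per credit, one credit per prompt, 4 images per prompt.
console.log(`DALL-E: ~$${(0.13 / imagesPerPrompt).toFixed(4)} per image`); // ~$0.0325
```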

Having used stock image sites extensively, and having spent some time playing around with the current AI offerings for purposes other than business, it's not clear to me which is more convenient and takes less time. Sometimes you'll get lucky and get a good AI image on the first try, but you could say the same about stock image sites. Where AI eliminates the need to go through pages and pages of stock images to find the right one, it replaces that with tweaking prompts and waiting for the images to generate. It should be noted that there is some learning curve to using AI as well – for instance, telling it to give you a "film still" or "photograph" when you want a representation of real life which isn't meant to look illustrated and cartoonish. There are a million of these tricks, and each system has its own small library of commands which it helps to be familiar with so you can get an optimal output. Ultimately, AI probably does take a little more time, and it requires more skill. Mindlessly browsing for stock images is still much easier than trying to get a good output from a generative AI (although playing with AI is usually more fun).

Where stock images simply can’t compete at all is uniqueness. When you generate an image with an AI, it is a unique image. Every image generated is one of one. You don’t get the “oh, I’ve seen this before” feeling that you get with stock images, which is especially prevalent for life science / laboratory topics given the relatively limited supply of scientific stock images. We will probably, at some point in the not too distant future, get past the point of being able to identify an AI image meant to look real by the naked eye. Stock images have been around for over a century and the uniqueness problem has only become worse. It is inherent to the medium. The ability to solve that problem is what excites me most about using generative AI imagery for life science marketing.

The Experiment! Ground Rules

If this is going to be an experiment, it needs structure. Here is how it is going to work.

The image generators & stock photo sites used will be:

I was going to include ShutterStock but there’s a huge amount of overlap with iStock, I often find iStock to have slightly higher-quality images, and I don’t want to make more of a project out of this than it is already going to be.

I will be performing 10 searches / generations. To allow for a mix of ideas and concepts, some will be of people, some of things, some microscopy-like images, and some of concepts which would normally be presented in an illustrated rather than photographed format. With the disclaimer that these concepts are taken solely from my own thoughts in the hope of achieving a good diversity of concepts, I will be looking for the following items:

  1. A female scientist performing cell culture at a biosafety cabinet.
  2. An Indian male scientist working with an LC-MS instrument.
  3. An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.
  4. A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.
  5. An NGS instrument on a bench in a genomics lab.
  6. A high-magnification fluorescent micrograph of neural tissues.
  7. A colored scanning electron micrograph of carcinoma cells.
  8. A ribbon diagram of a large protein showing quaternary structure.
  9. A 3D illustration of plasmacytes releasing antibodies.
  10. An illustration of DNA methylation.

So that nothing has an edge, none of these are things which I have recently searched for on stock image sites nor which I have previously attempted to generate using AI tools. Note that these are solely the ideas I am looking for; they are not necessarily the exact queries used when generating AI images or searching the stock photo sites.

Looking for stock images and generating AI graphics are very different processes but they both share one critical dimension: time. I will therefore be limiting myself to 5 minutes on each platform for each image. That’s a reasonable amount of time to try to either find a stock image or get a decent output from an AI. It will also ensure this experiment doesn’t take me two days. Here we go…

Round 1: A female scientist performing cell culture at a biosafety cabinet.

One thing that AI image generators are really bad at in the context of the life sciences is being able to identify and reproduce specific things. I thought that this one wouldn’t be too hard because these models are in large part trained on stock images and there’s a ton of stock images of cell culture, many of which look fairly similar. I quickly realized that this was going to be an exercise in absurdity and hilarity when DALL-E gave me a rack of 50 ml Corning tubes made of Play-Doh. I would be doing you a grave disservice if I did not share this hilarity with you, so I’ll present not only the best images which I get from each round, but also the worst. And oh, there are so many.

I can’t withhold the claymation 50 ml Corning tubes from you. It would just be wrong of me.

I also realized that the only real way to compensate for this within the constraints of a 5-minute time limit is to mash the generate button as fast as I can. When your AI only has a vague idea of what a biosafety cabinet might look like and it’s trying to faithfully reproduce them graphically, you want it to be able to grasp at as many straws as possible. Midjourney gets an edge here because I can run a bunch of generations in parallel.

Now, without further ado, the ridiculous ones…

Round 1 AI Fails

Dall-E produced a large string of images which looked less like cell culture than women baking lemon bars.

Midjourney had some very interesting takes on what cell culture should look like. My favorite is the one that looks like something in a spaceship and involves only machines. The woman staring at her “pipette” in the exact same manner I am staring at this half-pipette half-lightsaber over her neatly arranged, unracked tubes is pretty good as well. Side note: in that one I specifically asked for her to be pipetting a red liquid in a biosafety cabinet. It made the gloves and tube caps red. There is no liquid. There is no biosafety cabinet.

For those who have never used it, Stable Diffusion is hilariously awful at anything meant to look realistic. If you’ve ever seen AI images of melted-looking people with 3 arms and 14 fingers, it was probably Stable Diffusion. The “best” it gave me were things that could potentially be biosafety cabinets, but when it was off, boy was it off…

Rule number one of laboratories: hold things with your mouth. (Yes we are obviously kidding, do not do that.)

That was fun! Onto the “successes.”

Round 1 AI vs. Stock

Midjourney did a wonderful job of creating realistic-looking scientists in labs that you would only see in a movie. Also keeping with the movie theme, Midjourney thinks that everyone looks like a model; no body positivity required. It really doesn’t want people to turn the lights on, either. Still, the best AI results, by a country mile, were from Midjourney.

The best Dall-E could do was give me something you might mistake for cell culture at a biosafety cabinet if you weren't actually looking at it, just glancing past it as you turned your head.

Stable Diffusion’s best attempts are two things which could absolutely be biosafety cabinets in Salvador Dali world. Also, that scientist on the right may require medical attention.

Stock image sites, on the other hand, produce some images of cell culture in reasonably realistic looking settings, and it took me way less than 5 minutes to find each. Here are images from iStock, Getty Images, and Science Photo Library, in that order:

First round goes to the stock image sites, all of which produced a better result than anything I could coax from the AIs. AI 0 – 1 Stock.

Round 2: An Indian male scientist working with an LC-MS instrument.

I am not confident that AI is going to know what an LC-MS looks like. But let’s find out!

One notable thing that I found is that the less specific you become, the easier it gets for the AI. The below image was a response to me prompting Dall-E for a scientist working with an LC-MS, but it did manage to output a realistic looking person in an environment that could be a laboratory. It’s not perfect and you could pick it apart if you look closely, but it’s pretty close.

A generic prompt like “photograph of a scientist in a laboratory” might work great in Midjourney, or even DALL-E, but the point of this experiment would be tossed out the window if I set that low of a bar.

Round 2 AI Fails

Midjourney:

DALL-E:

Stable Diffusion is terrible. It’s difficult to tell the worst ones from the best ones. I was going to call one of these the “best” but I’m just going to put them all here because they’re all ridiculous.

Round 2 AI vs. Stock

Midjourney once again output the best results by far, and had some valiant efforts…

… but couldn’t match the real thing. Images below are from iStock, Getty Images, and Science Photo Library, respectively.

One thing you’ve likely noticed is that none of these are Indian men! While we found good images of scientists performing LC-MS, we couldn’t narrow it down to both race and gender. Sometimes you have to take what you can get! We were generally able to find images which show more diversity, however, and it’s worth noting that Science Photo Library had the most diverse selection (although many of their images which I found are editorial use only, which is very limiting from a marketing perspective).

Round 2 goes to the stock sites. AI 0 – 2 Stock.

Round 3: An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.

This should be easier all around.

Side note: I should’ve predicted this, but since my initial Midjourney prompt merely asked for science, it made the lab look like it was presenting something out of a sci-fi game. Looked cool, but not what we’re aiming for.

Round 3 AI Fails

DALL-E presented some interesting science on the genetic structure of dog kibble.

DALL-E seemed to regress with these queries, as if drawing more than one person correctly was just way too much to ask. It produced a huge stream of almost Picasso-esque people presenting something that vaguely resembled things which could, if sufficiently de-abstracted, be scientific figures. It’s as if it knows what it wants to show you but is drawing it with the hands of a 2-year-old.

Stable Diffusion is just bad at this. This was the best it could do.

Round 3 AI vs. Stock

Take the gloves off, this is going to be a battle! While Midjourney continued its penchant for lighting which is more dramatic than realistic, it produced a number of beautiful images with “data” which, though extravagant for a lab meeting, could possibly be illustrations of some kind of life science. A few had some noticeable flaws – even Midjourney does some weird stuff with hands sometimes – but they largely seem usable. After all, the intent here is as a replacement for stock images. Such images generally wouldn’t be used in a way which would draw an inordinate amount of attention to them. And if someone does notice a small flaw that gives it away as an AI image, is that somehow worse than it clearly being stock? I’m not certain.

Stock images really fell short here. The problem is that people taking stock photos don’t have data to show, so they either don’t show anyone presenting anything, or they show them presenting something which betrays the image as generic stock. Therefore, to make them look like scientists, they put them in lab coats. Scientists, however, generally don’t wear lab coats outside the lab. It’s poor lab hygiene. Put a group of scientists in a conference room and it’s unusual that they’ll all be wearing lab coats.

That’s exactly what iStock had. Getty Images had an image of a single scientist presenting, but you didn’t see the people he was presenting to. Science Photo Library, which has far less to choose from, also didn’t have people presenting visible data. The three comps are below:

Side Note / ProTip: You can find that image from Getty Images, as well as many other images that Getty Images labels as “royalty free”, on iStock (or other stock image sites) for way less money. Getty will absolutely fleece you if you let them. Do a reverse image search to find the cheapest option.

Considering the initial idea we wanted to convey, I have to give this round to the AI. The images are unique, and while they lack some realism, so do the stock images.

Round 3 goes to AI. AI 1 – 2 Stock.

Let’s see if DALL-E or Stable Diffusion can do better in the other categories.

Round 4: A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.

I’ve seen nice stock imagery of this before. Let’s see if AI can match it, and if I can readily find it again on the stock sites.

Round 4 AI Fails

DALL-E had a long string of images which looked like everything shown was made entirely of polystyrene and put in the autoclave at too high a temperature. You might have to click to expand to see the detail. It looks like everything partially melted, but then resolidified.

Stable Diffusion is more diffuse than stable. Three of these are the best that it did while the fourth is when it gave up and just started barfing visual static.

This is the first round where Midjourney, in my opinion, didn’t do the best job. Liquid handling systems have a fair amount of variability in how they can be presented, but pipette tips do not, and it didn’t seem to know what pipette tips should look like, nor how they would be arranged in a liquid handling system. These are the closest it got:

Very pretty! Not very accurate.

Round 4 AI vs. Stock

We have a new contestant for the AI team! DALL-E produced the most realistic-looking image. Here you have it:

Not bad! Could it be an automated pipetting system? We can’t see it, but it’s possible. The spacing between the tips isn’t quite even and it looks like PCR strips rather than a plate, but hey, a microplate wasn’t part of the requirements here.

Let’s see what I can dig up for stock… Here’s iStock, Getty, and SPL, respectively:

I didn’t get the drips I was looking for – probably needed to dig more for that – but we did get some images which are obviously liquid handling systems in the process of dispensing liquids.

As valiant an effort as DALL-E made, the images just aren’t clean enough to have the photorealism of real stock images. Round goes to the stock sites. AI 1 – 3 Stock.

Round 5: An NGS instrument on a bench in a genomics lab.

I have a feeling the higher-end stock sites are going to take this, as there aren’t a ton of NGS instruments so it might be overly specific for AI.

Round 5 AI Fails

Both Midjourney and DALL-E needed guidance that a next-generation sequencer wasn’t some modular device used for producing techno music.

With DALL-E, however, it proved not to be particularly trainable. I imagine its AI mind thinking: “Oh, you want a genome sequencer? How about if I write it for you in gibberish?” That was followed by it throwing its imaginary hands in the air and generating random imaginary objects for me.

Midjourney also had some pretty but far-out takes, such as this thing which looks much more like an alien version of a pre-industrial loom.

Round 5 AI vs. Stock

This gets a little tricky, because AI is never going to show you a specific genome sequencer, not to mention that if it did you could theoretically run into trademark issues. With that in mind, you have to give them a little bit of latitude. Genome sequencers come in enough shapes and sizes that there is no one-size-fits-all description of what one looks like. Similarly, there are few enough popular ones that unless you see a specific one, or its tell-tale branding, you might not know what it is. Can you really tell the function of one big gray plastic box from another just by looking at it? Given those constraints, I think Midjourney did a heck of a job:

There is no reason that a theoretical NGS instrument couldn’t look like any of these (although some are arguably a bit small). Not half bad! Let’s see what I can get from stock sites, which also will likely not want to show me logos.

iStock had a closeup photo of a MinION which, while it technically fits the description of what we were looking for, doesn’t fit the intent. Aside from that, it had a mediocre rendering of something supposed to be a sequencer and a partial picture of something rather old which might be an old Sanger sequencer?

After not finding anything at all on Getty Images, I found a picture of a NovaSeq 6000 right at the 5:00 mark, down to the wire. Science Photo Library had an image of an ABI SOLiD 4 on a bench in a lab with the lights off.

Unfortunately, Getty has identified the person in the image, meaning that even though you couldn’t ID the individual just by looking at the image, it isn’t suitable for commercial use. I’m therefore disqualifying that one. Is the oddly lit (and extremely expensive) picture of the SOLiD 4 or the conceptually off-target picture of the Minion better than what the AI came up with? I don’t think I can conclusively say either way, and one thing that I dislike doing as a marketer is injecting my own opinion where it shouldn’t be. The scientists should decide! For now, this will be a tie.

AI 1, Stock 3, Tie 1

Round 6: A high-magnification fluorescent micrograph of neural tissues.

My PhD is in neuroscience so I love this round. If Science Photo Library doesn’t win this round they should pack up and go home. Let’s see what we get!

Round 6 AI Fails

DALL-E got a rough, if slightly cartoony, shape of neurons but never really coalesced into anything that looked like a genuine fluorescent micrograph (top left and top center in the image below). Stable Diffusion, on the other hand, was either completely off the deep end or just hoping that if it overexposed out-of-focus images enough it could slide by (top right and bottom row).

Round 6 AI vs. Stock

Midjourney produced a plethora of stunning images. They are objectively beautiful and could absolutely be used in a situation where one only needed the concept of neurons rather than an actual, realistic-looking fluorescent micrograph.

They’re gorgeous, but they’re very obviously not faithful reproductions of what a fluorescent micrograph should look like.

iStock didn’t produce anything within the time limit. I found high-magnification images of neurons which were not fluorescent (probably colored TEM), fluorescent images of neuroblastomas (not quite right), and illustrations of neurons which were not as interesting as those above.

Getty Images did have some, but Science Photo Library had pages and pages of on-target results. SPL employees, you still have jobs.

A small selection from page 1 of 5.

AI 1, Stock 4, Tie 1

Round 7: A colored scanning electron micrograph of carcinoma cells.

This is another one where Science Photo Library should win handily, but there’s only one way to find out!

Round 7 AI Fails

None of the AI tools failed in such a spectacular way that it was funny. DALL-E produced results which suggested it almost understood the concept, although it could never quite put it together. Here’s a representative selection from DALL-E:

… and from Stable Diffusion, which as expected was further off:

Round 7 AI vs. Stock

Midjourney actually got it, and if these aren’t usable, they’re awfully close. As with the last round, these would certainly be usable if you needed to communicate the concept of a colored SEM image of carcinoma cells more than you needed accurate imagery of them.

iStock didn’t have any actual SEM images of carcinomas which I could find within the time limit, and Midjourney seems to do just as good of a job as the best illustrations I found there:

Getty Images did have some real SEM images, but the ones I found were credited to Science Photo Library, and their selection was absolutely dwarfed by SPL’s collection, which again had pages and pages of images of many different cancer cell types:

It just keeps going. There were 269 results.

Here’s where this gets difficult. On one hand, we have images from Midjourney which would take the place of an illustration and which cost me less than ten cents to create. On the other hand, we have actual SEM images from Science Photo Library that are absolutely incredible, not to mention real, but depending on how you want to use them, would cost somewhere in the $200 – $2000 range per photo.

To figure out who wins this round, I need to get back to the original premise: Can AI replace stock in life science marketing? These images are every bit as usable as the items from iStock. Are they as good as the images from SPL? No, absolutely not. But are marketers always going to want to spend hundreds of dollars for a single stock photo? No, absolutely not. There are times when it will be worth it, but many times it won’t be. That said, I think I have to call this round a tie.

AI 1, Stock 4, Tie 2

Round 8: A ribbon diagram of a large protein showing quaternary structure.

This is something that stock photo sites should have in droves, but we’ll find out. To be honest, for things like this I personally search for images with friendly licensing requirements on Wikimedia Commons, which in this case gives ample options. But that’s outside the scope of the experiment so on to round 8!

Round 8 AI Fails

I honestly don’t know why I’m still bothering with Stable Diffusion. The closest it got was something which might look like a ribbon diagram if you took a massive dose of hallucinogens, but it mostly output farts.

DALL-E was entirely convinced that all protein structures should have words on them (a universally disastrous yet hilarious decision from any AI image generator) and I could not convince it otherwise:

This has always baffled me, especially as it pertains to DALL-E, since it’s made by OpenAI, the creators of ChatGPT. You would think it would be able to at least output actual words, even if used nonsensically, but apparently we aren’t that far into the future yet.

Round 8 AI vs. Stock

While Midjourney did listen when I told it not to use words and, predictably, provided beautiful output, the results are obviously not genuine protein ribbon diagrams. Protein ribbon diagrams are a thing with a very specific look, and this is not it.

I’m not going to bother digging through all the various stock sites because there isn’t a competitive entry from team AI. So here’s a RAF-1 dimer from iStock, and that’s enough for the win.

AI 1, Stock 5, Tie 2. At this point AI can no longer catch up to stock images, but we’re not just interested in what “team” is going to “win” so I’ll keep going.

Round 9: A 3D illustration of plasmacytes releasing antibodies.

I have high hopes for Midjourney on this. But first, another episode of “Stable Diffusion Showing Us Things”!

Round 9 AI Fails

Stable Diffusion is somehow getting worse…

DALL-E was closer, but also took some adventures into randomness.

Midjourney wasn’t initially giving me the results that I hoped for, so to test if it understood the concept of plasmacytes I provided it with only “plasmacytes” as a query. No, it doesn’t know what plasmacytes are.

Round 9 AI vs. Stock

I should just call this Midjourney vs. Stock. Regardless, Midjourney didn’t quite hit the mark. There are an inordinately large number of ways to refer to plasmacytes (plasma cells, B lymphocytes, B cells, etc.) and it did eventually get the idea, but it never looked quite right, and it never got the antibodies right, either. It did get the concept of a cell releasing something, but those things look nothing like antibodies.

I found some options on iStock and Science Photo Library (shown below, respectively) almost immediately, and the SPL option is reasonably priced if you don’t need it in extremely high resolution, so my call for Midjourney has not panned out.

Stock sites get this round. AI 1, Stock 6, Tie 2.

Round 10: An illustration of DNA methylation.

This is fairly specific, so I don’t have high hopes for AI here. The main question in my mind is whether stock sites will have illustrations of methylation specifically. Let’s find out!

Round 10 AI Fails

I occasionally feel like I have to fight with Midjourney to not be so artistic all the time, but adding things like “realistic looking” or “scientific illustration of” didn’t exactly help.

Midjourney also really wanted DNA to be a triple helix. Or maybe a 2.5-helix?

I set the bar extremely low for Stable Diffusion and just tried to get it to draw me DNA. Doesn’t matter what style, doesn’t need anything fancy, just plain old DNA. It almost did! Once. (Top left below.) But in the process it also created a bunch of abstract mayhem (bottom row below).

With anything involving “methylation” in the query, DALL-E did that thing where it tries to replace accurate representation with what it thinks are words. I therefore tried to just give it visual instructions, but that proved far too complex.

Round 10 AI vs. Stock

I have to admit, I did not think that it was going to be this hard to get reasonably accurate representations of regular DNA out of Midjourney. It did produce some, but not many, and the best looked like it was made by Jacob the Jeweler. If methyl groups look like rhinestones, 10/10. DALL-E did produce some things that look like DNA stock images circa 2010. All of these have the correct helix orientation as well: right-handed. That was a must.

iStock, Getty Images, and Science Photo Library all had multiple options for images to represent methylation. Here is one from each, shown in the aforementioned order:

The point again goes to stock sites.

Final Score: AI 1, Stock 7, Tie 2.

Conclusion / Closing Thoughts

Much like generative text AI, generative image AI shows a lot of promise, but doesn’t yet have the specificity and accuracy needed to be broadly useful. It has a way to go before it can reliably replace stock photos and illustrations of laboratory and life science concepts for marketing purposes. However, for concepts which are fairly broad, or in cases where getting the idea across is sufficient, AI can sometimes act as a replacement for basic stock imagery. As for me, if I get a good feeling that AI could do the job and I’m not enthusiastic about the images I’m finding on lower-cost stock sites, I’ll most likely give Midjourney a go. Sixty dollars a month gets us functionally infinite attempts, so the value is pretty good. If we get a handful of stock images out of it each month, that’s fine – and there are some from this experiment we’ll certainly be keeping on hand!

I would not be particularly comfortable about the future if I were a stock image site, but especially for higher-quality or more specialized / specific images, AI has a long way to go before it can replace them.

"Want your products or brand to shine even more than it does in the AI mind of Midjourney? Contact BioBM and let’s have a chat!"

Google Ads Auto-Applied Recommendations Are Terrible

Unfortunately, Google has attempted to make them ubiquitous.

Google Ads has been rapidly expanding their use of auto-applied recommendations recently, to the point where it briefly became my least favorite thing until I turned almost all auto-apply recommendations off for all the Google Ads accounts which we manage.

Google Ads has a long history of thinking it’s smarter than you and failing. Left unchecked, its “optimization” strategies have the potential to drain your advertising budgets and destroy your advertising ROI. Many users of Google Ads’ product ads should be familiar with this. Product ads don’t allow you to set targeting, and instead Google chooses the targeting based on the content on the product page. That, by itself, is fine. The problem is when Google tries to maximize its ROI and looks to expand the targeting contextually. To give a practical example of this, we were managing an account advertising rotary evaporators. Rotary evaporators are very commonly used in the cannabis industry, so sometimes people would search for rotary evaporator related terms along with cannabis terms. Google “learned” that cannabis-related terms were relevant to rotary evaporators: a downward spiral which eventually led to Google showing this account’s product ads for searches such as “expensive bongs.” Most people looking for expensive bongs probably saw a rotary evaporator, didn’t know what it was but did see it was expensive, and clicked on it out of curiosity. Google took that cue as rotary evaporators being relevant for searches for “expensive bongs” and then continued to expand outwards from there. The end result was us having to continuously play negative keyword whack-a-mole to try to exclude all the increasingly irrelevant terms that Google thought were relevant to rotary evaporators because the ads were still getting clicks. Over time, this devolved into Google expanding the rotary evaporator product ads to searches for – and this is not a joke – “crack pipes”.

The moral of that story, which is not about auto-applied recommendations, is that Google does not understand complex products and services such as those in the life sciences. It likewise does not understand the complexities and nuances of individual life science businesses. It paints in broad strokes: broad strokes are easier to code, the managers don’t care because the changes make Google money, and since Google has something of a monopoly – about 90% of search volume excluding China – it has very little incentive to improve its services, because almost no one is going to pull their advertising dollars. Having had some time to observe the changes which Google’s auto-applied recommendations make, you can see the implicit assumptions built into them. Google either thinks you are selling something like pizza or legal services and largely have no clue what you’re doing, or that you have a highly developed marketing program with holistic, integrated analytics.

As an example of the damage that Google’s auto-applied recommendations can do, take a CRO we are working with. Like many CROs, they offer services across a number of different indications, with different ad groups for different indications. After Google had auto-applied some recommendations, some of which were bidding-related, we ended up with ad groups which had over a 100x difference in cost per click. With highly specific and targeted keywords in each ad group, there is no reasonable argument for how Google, in the process of optimizing for conversions, could decide that one ad group should have a CPC more than 100x that of another. The optimizations did not lead to more conversions, either.

Google’s “AI” ad account optimizer further decided to optimize a display ad campaign for the same client by changing bidding from manual CPC to optimizing for conversions. The campaign went from getting about 1800 clicks / week at a cost of about $30, to getting 96 clicks per week at a cost of $46. CPC went from $0.02 to $0.48! No wonder they wanted to change the bidding; they showed the ads almost 19x less (CTR was not materially different before / after Google’s auto-applied recommendations) and charged roughly 24x more per click. Note that the targeting did not change. What Google was optimizing for was their own revenue per impression! It’s the same thing they’re doing when they decide to show rotary evaporator product ads on searches for crack pipes.
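Those numbers are worth a quick sanity check, and the arithmetic is simple enough to script. Here’s a minimal sketch using the rounded weekly figures quoted above (the variable names are mine, not anything from Google Ads):

```typescript
// Back-of-the-envelope check of the campaign stats quoted above.
const before = { clicksPerWeek: 1800, costPerWeek: 30 }; // manual CPC
const after = { clicksPerWeek: 96, costPerWeek: 46 };    // after auto-applied changes

const cpcBefore = before.costPerWeek / before.clicksPerWeek; // ≈ $0.017 (rounds to $0.02)
const cpcAfter = after.costPerWeek / after.clicksPerWeek;    // ≈ $0.479 (rounds to $0.48)

// CTR was essentially unchanged, so impressions scale with clicks.
const impressionDrop = before.clicksPerWeek / after.clicksPerWeek; // ≈ 18.8x fewer impressions
const cpcMultiple = cpcAfter / cpcBefore; // ≈ 29x on raw figures; ~24x on the rounded CPCs

console.log({ cpcBefore, cpcAfter, impressionDrop, cpcMultiple });
```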

“Save time.” Is that what we’re doing?

Furthermore, Google’s optimizations to the ads themselves amount to horribly generic guesswork. A common optimization is to simply include the name of the ad group or terms from pieces of the destination URL in ad copy. GPT-3 would be horrified at the illiteracy of Google Ads’ optimization “AI”.

A Select Few Auto-Apply Recommendations Are Worth Leaving On

Google has a total of 23 recommendation types. Of those, I always leave on:

  • Use optimized ad rotation. There is very little opportunity for this to cause harm, and it addresses a point difficult to determine on your own: what ads will work best at what time. Just let Google figure this out. There isn’t any potential for misaligned incentives here.
  • Expand your reach with Google search partners. I always have this on anyway. It’s just more traffic. Unless you’re particularly concerned about the quality of traffic from sites which aren’t google.com, there’s no reason to turn this off.
  • Upgrade your conversion tracking. This allows for more nuanced conversion attribution, and is generally a good idea.

A whole 3/23. Some others are situationally useful, however:

  • Add responsive search ads can be useful if you’re having problems with quality score and your ad relevance is stated as being “below average”. This will, generally, allow Google to generate new ad copy that it thinks is relevant. Be warned, Google is very bad at generating ad copy. It will frequently keyword spam without regard to context, but at least you’ll see what it wants you to do to generate more “relevant” ads. Note that I suggest this over “improve your responsive search ads” so that Google doesn’t destroy the existing ad copy which you may have spent time and effort creating.
  • Remove redundant keywords / remove non-serving keywords. Google says that these options will make your account easier to manage, and that is generally true. I keep these off because my redundant keywords are usually there for a good reason, and non-serving keywords may occasionally become serving keywords if volume improves for a period of time. But if you value simplicity over deeper data and capturing every possible impression, then leave these on.

That’s all. I would recommend leaving the other 18 off at all times. Unless you are truly desperate and at a complete loss for ways to grow your traffic, you should never allow Google to expand your targeting. That lesson has been repeatedly learned with Product Ads over the past decade plus. Furthermore, do not let Google change your bidding. Your bidding methodology is likely a very intentional decision based on the nature of your sales cycle and your marketing and analytics infrastructure. This is not a situation where best practices are broadly applicable, but best practices are exactly what Google will try to enforce.

If you really don’t want to be bothered at all, just turn them all off. You won’t be missing much, and you’re probably saving yourself some headaches down the line. From our experience thus far, it seems that the ability of Google Ads’ optimization AI to help optimize campaigns for life sciences companies is far less than its ability to create mayhem.

"Even GPT-4 still gets the facts wrong a lot. Some things simply merit human expertise, and Google Ads is one of them. When advertising to scientists, you need someone who understands scientists and speaks their language. BioBM’s PhD-studded staff and deep experience in life science marketing mean we understand your customers better than any other agency – and understanding is the key to great marketing.

Why not leverage our understanding to your benefit? Contact Us."

How to Write a Life Science White Paper

From the perspective of the marketer, a critical early task in the life science buying journey is education. It may even come before your audience of scientists recognizes they have a problem which needs a product or service to solve it. Once you have piqued their interest and seeded an idea in their minds, you need a lot more to get them across the finish line. Sometimes, a longer-form method of communication is merited, and that’s where the white paper comes in.

The Life Science Buying Journey

For those who are relatively new to this website, it should be expressed that I’m largely an adherent of Hamid Ghanadan’s viewpoint of the scientific buying journey, which views scientists as inherently both curious and skeptical. It’s illustrated in detail in his excellent book Persuading Scientists, which is well-deserving of this long-overdue shout-out. I’ve captured some of the concepts in a previous post: “The Four Key Types of Content.” To give the oversimplified TL;DR version of both:

  • The default state of scientists is curious. They readily take in information.
  • As they take in new information, they form ideas about it and transition from being curious to being skeptical.
  • If they cannot validate the information, they generally reject it.

You can see how a buying journey fits into this mindset:

  • The scientist is presented with a new idea.
  • As they learn more about this idea, they realize that they may need a product or service.
  • They critically evaluate the product(s) / service(s) presented to them.
  • A decision is made.

The goal of the marketer is to seed the scientist’s curiosity, continuing to provide them with information which will shape their viewpoint in your favor without engaging skepticism too early. That is how you maximize your chances of a positive purchasing decision.

Understanding What a White Paper Is … and Isn’t

A white paper is intended to provide either educational content (helpful, customer-centric information) or validation content (information which verifies a belief that the customers hold or a claim that the brand is making, which may be customer-centric or product-centric). In either situation, the primary purpose is to inform your audience. Novice marketers may consider the format (usually PDF) and conflate a white paper with a brochure, but they are two very different things.

All marketing documents exist on a rhetorical sliding scale between being fully informational and fully promotional. A brochure would be far onto the promotional side of that scale; it is extremely product-centric and its purpose is largely to encourage a purchase. A white paper would be most of the way towards the informational side of that scale. Creating a white paper which is overly promotional risks engaging the scientists’ skepticism before they have adopted your viewpoint, creating a situation where their inclination is to disbelieve you. This situation generally results in them rejecting your offering.

Writing Copy for an Effective White Paper

Your white paper should be about:

  • a single topic
  • which is of interest to your audience
  • of which you know substantially more than your audience

This may seem simple, but framing it can be difficult.

Presumably, your company is in the business of solving some type of problems for life scientists. They might not know what their problem is, but you do. Why should they care? Why is what you are doing compelling? You almost certainly have answers to these questions, but you likely have them framed in the context of your product. How can you take those answers and communicate them in a manner which is customer-centric instead of product-centric? Start by talking about your scientist-customers’ problem rather than your solution and you’ll be headed in the right direction.

There are times when a more product-focused white paper can be appropriate, however. For instance, you may have a new technology which is unfamiliar to your audience and you need to educate them about it. In this case, you have to talk about your solution to some extent. When that is the case, be sure to focus on providing information about the technology, not promotion for the product. You need to take care to ensure the information is objective, is communicated in an unbiased manner, is well-referenced with independent sources, and uses independent voices (e.g. voice of the customer) wherever an opinion is necessary.

Formatting a White Paper Effectively

There is no particular length restriction on a life science white paper, but if you are calling it a white paper, your audience is likely expecting it to be somewhat in depth. A two-page minimum for a white paper is a good guideline to adhere to. For much longer white papers, you should consider yourselves constrained by your ability to maintain your audience’s attention. Demonstrating your expertise does not mean writing more than you need to. As is almost always the case, less is more. Be as concise as you can while fully communicating your point.

Avoid walls of text. Too many words and not enough visuals will make your audience less likely to get through your content. Use illustrations where possible, and don’t feel bad using relevant stock imagery to break things up. Ensure the document isn’t boring to the eyes by using brand-relevant colors, shapes, iconography, and other visuals. Ideally, you should have a generalized white paper format which you maintain throughout all of your documents to provide consistency. You want people who read your white paper to know it is your brand’s white paper, even if they didn’t see a logo.

Circling back to what a white paper is and isn’t, you’ll recall that we need a primarily informational document. However, you might not want an entirely informational document. Your job is to sell things, and purely informational things are generally not great at selling. You want to sprinkle some promotion in there. But how? Through creative use of formatting! You don’t want people to become skeptical of the information you are providing them in the body of the white paper, so don’t put promotional content in the body of the white paper! Use clearly-delineated sections to cordon off your promotional content. Help prevent skepticism of your promotional messages by using voice-of-customer content (testimonials, etc.) whenever possible. You can also save your promotional messages for where customers will most expect them – the end of the document. Like almost all effective marketing documents, you don’t want to leave out the call-to-action!

This is a stock image of life science brochure templates and doesn’t say anything meaningful at all, but you probably stopped to look at them because they’re visually appealing.

Deploy Your White Paper Effectively

Far too often, life science companies will write a really good white paper and then tuck it away in some remote corner of their website. You have it, use it! Post about it on social media (more than once!), put it somewhere on your website which is relevant but readily findable by anyone looking for that kind of information, and blast it out in an email to a well-segmented section of your audience. If appropriate, use it as the hook for a well-targeted paid advertising campaign. The worst thing you can do after spending the time and resources to create a white paper is to only have a few dozen people ever read it.

Presumably you’ll be using your white paper to generate leads and will therefore have it gated with a download form (although you certainly don’t have to). If it is gated, create a compelling download page for your white paper which previews just enough of the content to make the audience want more but without giving up its most important lessons.

Recap on Effective Life Science White Papers

To write an effective white paper:

  • Understand where your white paper fits within the customer journey.
  • Maintain its primarily informational purpose.
  • Keep to one topic which will be of interest to your audience.
  • Focus on information which most of your audience likely will not know.
  • Allow what you have to communicate to dictate the length.
  • Don’t skimp on the visuals.
  • Clearly separate any promotional messages to avoid creating skepticism about the core topic.
  • Shout it from the rooftops to get attention to it!

White papers are centerpieces of many life science demand generation campaigns. By understanding and implementing these guidelines, they can help drive successful lead generation for your life science company as well.

"Not sure how to best deploy content to help fuel your marketing efforts? Experiencing writer’s block? Don’t spend time fretting, just contact BioBM. Our life science marketing experts are here to help innovative companies like yours craft purposeful, effective content to influence your scientist-customers and encourage them into action."

Stop Hosting Your Own Videos

I know this isn’t going to apply to 90% of you, and to anyone who is thinking “of course – why would anyone do that?” – I apologize for taking your time. Those people who see this as obvious can stop reading. What that 90% may not know, however, is that the other 10% still think, for some terrible reason, that hosting their own videos is a good idea. So, allow me to state conclusively:

Hosting your own videos is always a terrible decision. Let’s elaborate.

Reasons Why Hosting Your Own Videos Is A Terrible Decision:

  1. Your audience is not patient. If you think they’re going to wait through more than one or two (if you’re lucky) periods of buffering, you’re wrong. Videos are expensive to produce. If you’re putting in the resources to make a video, chances are you want as much of your audience as possible to see it. Buffering will ensure they don’t.
  2. Your servers are not built for this. Your website is most likely hosted on a server which is designed to serve up webpages. Streaming video content is probably not your host’s cup of tea. In fact, they’d probably rather you not do it (or tell you to buy a super-expensive hosting plan to accommodate the bandwidth requirements of streaming video).
  3. Your video compression is probably terrible. Your video editing software certainly will export your video into a compressed file. “Compressed,” in this sense, means not the giant, unwieldy raw data file that you would otherwise have. It does not mean “small enough to stream effectively.” You know whose video compression is leagues beyond anything else you’re going to find? YouTube, Vimeo, and probably most other major services that stream video on the internet as a business.
  4. There are companies that do this professionally. When I was in undergrad and majoring in chemical engineering, the other majors jokingly referred to us as “glorified plumbers,” but I don’t touch pipes. I don’t know the first thing about plumbing. So what do I do when I get a leak? I call a plumber, because they’ll definitely solve the problem far better than I would. Likewise, if you want to host video, why not get a professional video hosting service? There’s plenty of them out there, including some that are both very reputable and inexpensive.

An Example

I’m at my office on a reasonably fast internet connection. It’s cable, not fiber optic, but it’s also 11:30 in the morning – not prime “Netflix and chill” time when the intertubes are clogged up with people binge watching a full season of House of Cards. Just to show you that any bandwidth problems aren’t on my end, I did an Ookla Speedtest:

The internet is fast.

239 Mbps. Not tech school campus internet kind of fast, but more than fast enough to stream multiple YouTube videos at 4k if I wanted to.

And now for the example… I’m not going to tell you whose video this is, but they have an ~1 minute long video to show how easy their product is to use. Luckily for me, they don’t have a lot of branding on it so I can use them as an example without shaming them. The below screenshots are where the video stopped to buffer. Note that the video was not fullscreened and was about 1068 x 600. You can click the images to see them full size and see the progress bar and time at the bottom.

Made it 18 seconds! Off to a slightly less than disastrous start…

28 seconds. Getting there…

Well that didn’t go far. 32 seconds.

37 seconds. There’s no way I’d still be watching this if I wasn’t doing this for the purposes of demonstration.

42 seconds…

51 seconds! Almost there!

“Done” … or not quite done. 56 seconds. I don’t even know why it stopped to buffer here as almost the entire rest of the video was already downloaded.

The video stopped playing 7 times in the span of 64 seconds.

What To Do Instead

Perhaps the most well-known paid video hosting service, Vimeo has a pro subscription that will allow you to embed ad-free videos without their branding for $20 / month. There are a bunch of other, similar services out there as well. Or, if you don’t want to spend anything and don’t mind the possibility of an ad being shown prior to your video, you can just embed YouTube videos. The recommended videos which show after playback can be easily turned off in the embed options. You can even turn off the video title and player controls if you don’t want your audience to be able to click through to YouTube or see the bar at the bottom (although the latter also makes them unable to navigate through your video).
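If you go the YouTube route, those embed options are just query parameters on the embed URL. Below is a minimal sketch of building such an embed in the browser. The parameter names (rel, controls, modestbranding) are YouTube embed parameters as best I know them, but YouTube has changed their exact behavior over the years, so verify against the current embed documentation; the video ID and container element here are hypothetical placeholders.

```typescript
// Minimal sketch: embed a YouTube video with post-playback recommendations,
// player controls, and (most) on-player branding suppressed.
function embedYouTubeVideo(videoId: string, container: HTMLElement): void {
  const params = new URLSearchParams({
    rel: "0",            // limit / disable recommended videos after playback
    controls: "0",       // hide the control bar (viewers also lose seeking)
    modestbranding: "1", // tone down the YouTube logo in the player
  });

  const iframe = document.createElement("iframe");
  iframe.src = `https://www.youtube.com/embed/${videoId}?${params.toString()}`;
  iframe.width = "960";
  iframe.height = "540";
  iframe.allowFullscreen = true;
  container.appendChild(iframe);
}

// "#video-container" is a hypothetical element on your page.
const container = document.querySelector<HTMLElement>("#video-container");
if (container) {
  embedYouTubeVideo("VIDEO_ID", container); // replace with your video's ID
}
```

As noted above, hiding the controls also removes the viewer’s ability to navigate the video, so weigh that trade-off before turning them off.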

Basically, if you want your videos to actually get watched, do anything other than hosting them yourself.

P.S. – If you’ve read all this and still think hosting your own videos is the correct solution, which it’s not, here’s a tip: upload them to YouTube, then download them using a tool like ClipConverter. This way you’ll at least get the benefit of YouTube’s video compression, which is probably the best in the world.

"Want marketing communications that truly captivate and engage your customers? It’s time to contact BioBM. Our life science marketing experts are here to help innovative companies better reach, influence, and convert scientists."

FAQs: Content and SEO’s Low-Hanging Fruit

Creating content in support of your products and services is hard. Finding something to say which is both unique and valuable to the audience is a non-trivial endeavor; however, it remains critical for persuading your audience that your product or service is right for them … and for persuading search engines that your website is important.

That said, it’s incredible how many brands overlook this one simple, effective, easy-to-create content tool: the FAQ.

You don’t even have to do the thinking for an FAQ. Your customers do it for you. In your day-to-day sales and support operations, customers are asking questions all the time. All you need to do is document them and their answers, put them on your website, and bingo! – You now have an FAQ.

FAQ Best Practices

It’s absolutely possible to make a terrible FAQ, but really easy not to. If you follow these guidelines when creating your FAQ, you’ll be set:

  • Talk to your sales and / or support teams about the questions that they are getting from customers. If you’re creating an FAQ, you want to be sure it’s answering questions that your customers actually have.
  • The best FAQ questions are broadly relevant and / or address an important question. If a question comes from a person with a niche application and would only be relevant to the small subset of the audience using your product for that application, it’s probably not worthy of adding to the FAQ. If you have too much clutter, people won’t use it.
  • It’s really easy to end up with oceans of FAQ content. You don’t want your FAQ content to overwhelm your audience because there is too much of it. In addition to being selective with what content makes the grade for your FAQ section, use design tools such as accordions to help minimize the content overload and help ensure that customers are only presented with the FAQ content which is most relevant to them.
  • Keep FAQ content on the page of the product / service it pertains to whenever possible. Forcing people to navigate away to FAQ content is usually neither a good navigational experience nor the best for SEO.
  • If you have a long FAQ section, try to keep the most important and / or broadly relevant information towards the top, where it will be more likely to be seen.

To give you a better idea of how you may be able to leverage FAQ content, let’s take a look at a few examples.

FAQ Critiques

Agilent’s website makes ample use of FAQ content, which is great. To give an example, I’ll look at the page for their 280FS AA Atomic Absorption Spectrometer. They have a lot of stuff on this page, but they use a left-hand navigation menu with anchor links to help users find the information they need. In the “Support” section there is an FAQ, along with other categories of content, each of which has an accordion feature.

FAQ section on a product page of the Agilent website

Agilent’s FAQ has a good amount of content in it, and they make it more manageable by only showing the questions. You have to click the question to see the answer. Unfortunately, when you click the question, you are directed to a page that has only that one question and answer on it, meaning the page is of relatively low value and has taken the user away from the bulk of the information they are seeking, leading to a sub-optimal user experience (you need to wait for the page to load, then click back to get back to where you were). Additionally, having many pages with “thin” content is far less beneficial from an SEO standpoint than having one page with lots of content. If, for instance, they instead had a nested accordion in which the answer dropped down when it was clicked, this would circumvent the need for individual pages for each answer while still showing a relatively manageable amount of information to each user.
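To illustrate, here’s a minimal sketch of that nested-accordion pattern. The markup conventions are hypothetical (this is not Agilent’s actual code); the point is that each answer drops down in place, so users never leave the page and all of the Q&A content stays on one content-rich page.

```typescript
// Minimal nested-accordion sketch: clicking a question toggles its answer
// in place, so every Q&A pair stays on the product page (one content-rich
// page rather than many thin ones).
// The ".faq-question" convention is hypothetical; adapt it to your own HTML.
document.querySelectorAll<HTMLElement>(".faq-question").forEach((question) => {
  // Assumes each answer element immediately follows its question element.
  const answer = question.nextElementSibling as HTMLElement | null;
  if (!answer) return;

  answer.hidden = true; // collapsed by default, but still present in the page

  question.addEventListener("click", () => {
    answer.hidden = !answer.hidden;
    question.classList.toggle("open", !answer.hidden);
  });
});
```

The native HTML details / summary elements can achieve much the same drop-down effect with no JavaScript at all, if you prefer to keep it simple.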

Laboratory Supply Network also makes frequent use of FAQs. FAQs are perhaps of even greater value for distributors and resellers since these companies are often starved of unique content. FAQs, product reviews, and other mechanisms for generating unique content can both improve their SEO and differentiate them from competition who may be selling similar (or the same) products. As an example, we’ll use their Q500 FAQ on Homogenizers.net. Laboratory Supply Network puts their FAQs in a separate tab from other information on the product page, helping to prevent clutter. They also have all the FAQ information directly on the product page, which maximizes the SEO benefit. However, within the FAQ tab, there are no aids to help users find the information which may be of value to them. The only way to see which questions are answered is to scroll through them all – and through their answers. This is non-ideal, especially if there are a lot of questions and / or the questions have long answers. While users will scroll, too much scrolling decreases the likelihood that content near the bottom will be seen.

FAQ section on a product page of the Homogenizers.net website

In Conclusion

FAQs add value for your customers and improve the SEO of your website. As with just about any content generation effort, your primary question should be: “can we do this in a manner which is valuable for our audience?” If you have a complex product or service, or there are any common uncertainties that customers have about your business, it’s likely that you can both deliver and receive value through an FAQ. Ensure that you’re following best practices, and you’ll maximize its value.

"Looking to create content which has a discernible impact on your business? Looking for practical, realistic means to improve your search marketing? BioBM helps life science companies with almost any marketing needs. Contact us today and learn how we can help build your company into a powerhouse brand with rapidly growing revenues."

We Just Got Skyscrapered

Just yesterday, we got skyscrapered. No, we didn’t get an office in a giant building or fly an ad from one or anything like that, nor is that some weird pop-culture thing that teenagers are putting on YouTube. We were the target of an attempt at “skyscraper marketing” … and I’m talking about it, so I guess it worked in a sense.

I’ll talk more about this particular instance in a moment, but first I wanted to give an intro to skyscraper marketing for anyone who isn’t familiar with it.

The “What” and “Why” of Skyscraper Marketing

Skyscraper marketing was one method which was popularized after Google cracked down on manipulative link building in the early 2010s (most notably with the 2012 Penguin algorithm update). To summarize the implications of that in brief: there was once a time when you could “trick” Google into thinking that your website was more important than it was by posting links around the internet pointing to your website. Google’s updates put an end to that once and for all and penalized websites that did not comply. From then on, if you wanted to prove your website’s importance (and thereby improve your search ranks), you needed to earn your backlinks organically.

That’s about the time when content marketing became more important. From that point, not only was it the validation that showed prospects you knew what you were talking about, but it was the primary tool at your disposal to influence your search rankings (beyond the basic on-site optimization, such as optimized URLs and title tags, that everyone does and therefore isn’t a real source of competitive advantage). The more shareable the content, the more backlinks it would likely get, and therefore the better it was for SEO.

Thus, Skyscraper Marketing was devised. At its most basic, I can break it down into a three step process:

  1. Find successful content.
  2. Improve upon it.*
  3. Share it with people who would be interested in it and, in turn, share it themselves.

*The necessity for improvement is debatable, but you do have to do something to it. More on that in a moment…

The “How” of Skyscraper Marketing

Skyscraper marketing is, essentially, a type of influencer marketing in that the important part is the last step – getting people with engaged audiences to share it. That being the case, there are two primary approaches (and you don’t have to choose between them – you can do both at the same time).

The first approach is the incremental improvement approach. You find some good content which you have something to add to / make better / pose a counterpoint to / etc., then distribute it to a bunch of people who would find it relevant and potentially want to share it. In this approach, you’re adding something to the general body of knowledge in the hope that your contributed insight is enough to make it a worthwhile share – especially from people who have large audiences themselves. Again, the goal is to get as many backlinks and as many eyeballs as possible (those goals do overlap) so the more people you reach out to the better.

The second approach is the “stroking one’s ego” approach. In this approach, your goal isn’t necessarily to improve upon good pieces of content, but rather to act as an aggregator. You take really good tidbits from the thinking of a number of different influencers, and repackage them into a single, easily digestible, and readily shareable piece of content, being sure to reference and link to the authors / posts whose thinking you aggregated. You then reach back out to those people and let them know that you published something which referenced them. People, being generally inclined towards things that make them seem important, will share your article which highlights their own thinking.

BioBM’s Skyscraper Marketing Tips

As with influencer marketing, you want to take care to do it correctly. If you don’t, you’ll not only waste your time and effort, but you’ll also get a reputation among the influencers in your market as a peddler of junk content. If that happens, skyscraper marketing or other forms of influencer marketing will be more difficult for you in the future. Just as poor quality content can reflect badly upon your brand, asking people to share poor quality content will erode your relationships with those influencers.

To not be “that guy,” here are some useful tips:

  • Don’t spam your network. Only send out good content and only send it to people who would find it genuinely relevant.
  • Don’t plagiarize copy … or ideas. If people realize they’ve heard it all before elsewhere, they probably won’t share it.
  • Note that “improved content” does not mean “longer content.” A lot of people have a habit of focusing on expanding upon an idea rather than improving upon it. Improvement is far more important than expansion. If you make something better or take a novel perspective on an idea, that’s far more worthy of sharing than simply adding more of the same.
  • “Improved content” also doesn’t mean that you need to improve on the idea itself. Communicating it more effectively – for instance, using illustration to more clearly demonstrate a complex point – can be just as valuable.
  • Always remember: your content behaves like a product and must be differentiated!
  • If you’re going to take an ego-driven approach, be sure you show that you have taken the time to fully understand and eloquently explain the idea, and give some praise to the original author without coming off as a flatterer.

So to finish the story…

Upon checking our social media dashboards this morning, I saw this tweet:

I’ve been published more than the average person, but that’s still enough to get my attention so I gave it a quick read through. I ended up not sharing it on our @BioBM twitter account (and I don’t use my personal @CHoytPhD twitter anymore) for a few reasons. Primarily, we have very high standards for what BioBM publishes through our channels. We generally require there to be some element of newness, and we didn’t find there to be any particularly fresh thinking. (Sorry, Joe! No offense intended.) Secondarily, it was a really obvious skyscraper attempt, especially since our idea which was shared wasn’t strongly relevant to the body of the article and was simply one of many listed in bullet point format towards the end. On the other hand, Joe did well not to plagiarize the ideas which he referenced, but rather offered a tidbit of them with a link to the source. That was nice of him. (Thanks, Joe!)

That said, it did engage a discussion on twitter and his post did end up being linked to on our blog, so I suppose Joe can claim victory after all. He’s also welcome to follow this shameless promotion for our “Marketing of Life Science Tools & Services” LinkedIn group and post it there as well. 2262 members and counting!

Just for fun, and because who doesn’t love architecture, here’s a few more images of skyscrapers. All images are courtesy of Unsplash, which in an amazing feat of generosity allows their beautiful, high-resolution images to be used for any purpose and without attribution. I find that so awesome that I’m giving them attribution anyway.


"Innovative companies deserve innovative marketing. If you want to leverage the next generation of marketing strategies to not only help you achieve success, but create genuine strategic advantage for your company, contact BioBM. It’s never too early or too late, but the sooner we get started the more of a head start you’ll have."

Why People Are Loyal … to ANYTHING

I was reading the MarketingCharts newsletter today and saw a headline: “What Brings Website Visitors Back for More?” The data was based on a survey of 1000 people, and they found the top 4 reasons were, in order:
1) They find it valuable
2) It’s easy to use
3) There is no better alternative for the function it serves
4) They like its mission / vision

Website Loyalty Data from MarketingCharts.com

I thought about it for a second and had a realization – this is why people are loyal to ANYTHING! And achieving these 4 things should be precisely our goal as marketers:
1) Clearly demonstrate value
2) Make your offerings – and your marketing – accessible
3) Show why your particular thing is the best. (Hint: If it’s not the best you probably need to refine your positioning to find the market segment that it is the best for.)
4) Tell your audiences WHY. Get them to buy into it. Don’t just drone on about the what, but sell them on an idea. Captivate them with a belief!

Do those 4 things well, you win.

BTW, the MarketingCharts newsletter is a really good, easy-to-digest newsletter – mostly B2C focused, but there’s some great stuff in there even for a B2B audience, and you can get most of the key points in each day’s newsletter in under a minute.

"Captivate your customers’ loyalty. Contact BioBM and let’s turn your marketing program into a strategic advantage."

Are You Providing Self-Service Journeys?

Customers are owning more of their own decisions.

We’ve all heard the data on how customers are delaying contact with salespeople and owning more of their own decision journeys. Recent research from Forrester predicts that the share of B2B sales, by dollar value, conducted via e-commerce will increase by about a third from 2015 to 2020: from 9.3% to 12.1%. Why does Forrester see this number growing at such a rate? Primarily due to “channel-shifting B2B buyers” – people that are willfully conducting purchases entirely online rather than going through a manned sales channel.

All this adds up to more control of the journey residing with the customers themselves and less opportunities for salespeople to influence them. Your marketing needs to accommodate these control-desiring customers. It needs to accommodate as much of the buying journey as it can, and in many instances it can and should accommodate the entire buying journey – digitally.

Scientist considering an online purchase

Accommodating Digital Buying Journeys

Enabling self-service journeys is a complex, multi-step process. In brief, it consists of:

  1. Understanding the relevant customer personas. Defining customer personas is always a somewhat ambiguous task, but my advice to those doing it is always not to over-define them. It’s easy to reach so much granularity that the exercise becomes meaningless: far too many personas with far too little to distinguish their journeys in any practical sense. It’s okay to paint with a broad brush. For a relatively small industry such as ours, factors such as “level of influence on the purchasing decision” and “familiarity with the technology” are far more useful than the B2C demographics you’ll likely see used if you look up examples of creating customer personas. It probably doesn’t much matter whether the scientist you’re defining is a millennial or a Gen X-er, nor do you likely need to account for the difference between scientists and senior scientists. That’s not what’s important. Focus on the critical factors and clear your mind of everything else.
  2. Mapping the journey for each persona. This can be done with data analytics, market research, and/or simply as a good old-fashioned thought experiment, depending on your resources and capabilities as well as how accurate you need to be. If you’re using data, take the customers who converted as examples and trace their buying journeys from the beginning (which will probably have online and offline components). Bin each into the appropriate persona, then use them to inform what the journey requires for that persona. The market research approach is fairly straightforward and can be done with any combination of interviews, focus groups, and user testing. If you’re on a budget and just want to sit down and brainstorm the decision journey, start with each “raw” customer persona and ask: “Where does this person want to go next in their decision journey?” A scientist may want more information, they may desire a certain experience, etc. Keep asking that question until you get to the point of purchase.
  3. Mapping information or experiences to each step of the journey. Once you know the layout of the journeys and the goal at each step, it should be relatively clear what you need to provide the customer to move them forward. This step really just asks: “How will we address the customer’s needs at each discrete step of their journey?”
  4. Determining the most appropriate channel for the delivery of each experience. You now know what you’re going to deliver to each customer at each point in the decision journey to keep them moving forward, but how you deliver it matters as well. On paper, it might seem as though you could simply provide all the information and experiences the customer needs in one sitting, and that’s all they would need to complete their decision journey. In practice, it rarely works that way. Decisions often involve multiple stakeholders and take place over days, weeks, or months. Few B2B life science purchasing decisions are made on impulse. For young or less familiar brands, you may also need time for the scientist to develop enough familiarity with your brand to be comfortable purchasing from you. This is where you must consider not only the structure of the buying journey but also the less tangible elements of its progression. Structured correctly, your roadmap should essentially remove steps from the buying journey for the customer. (The sketch after this list shows one way to represent this persona-to-channel mapping.)
  5. Implement it! You now know what the scientists’ decision journeys look like and exactly how you’ll address them. Bring that knowledge into the real world and create a holistic digital experience that enables completion of the self-serve buying journey!
  6. That’s it! Your marketing is now ready for today’s (and tomorrow’s) digitally-inclined buyers.
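
To make steps 1 through 4 concrete, here’s a minimal sketch of how you might represent a persona and its mapped journey in code. It’s Python, and everything in it – the persona, the journey stages, the content, and the channels – is a hypothetical illustration of the structure, not a prescription:

from dataclasses import dataclass, field

@dataclass
class JourneyStep:
    # One stage of a persona's journey (step 2), the content that
    # addresses it (step 3), and the channel that delivers it (step 4).
    stage: str
    customer_goal: str
    content: str
    channel: str

@dataclass
class Persona:
    # A broadly-drawn persona (step 1); capture only the factors
    # that actually change the journey.
    name: str
    purchase_influence: str
    tech_familiarity: str
    journey: list[JourneyStep] = field(default_factory=list)

# Hypothetical example: a bench scientist evaluating a reagent.
bench_scientist = Persona(
    name="Bench scientist",
    purchase_influence="influencer",
    tech_familiarity="expert",
    journey=[
        JourneyStep(
            stage="problem awareness",
            customer_goal="understand why the current reagent underperforms",
            content="application note comparing reagent chemistries",
            channel="organic search / blog",
        ),
        JourneyStep(
            stage="evaluation",
            customer_goal="confirm performance in their own assay",
            content="validated protocol plus a sample request offer",
            channel="product page",
        ),
        JourneyStep(
            stage="purchase",
            customer_goal="buy without talking to a salesperson",
            content="transparent pricing and online ordering",
            channel="e-commerce checkout",
        ),
    ],
)

# Walking the map makes gaps obvious: any step without content or a
# channel is a step where the self-service journey breaks down.
for step in bench_scientist.journey:
    print(f"{step.stage}: {step.content} -> delivered via {step.channel}")

The point of writing the map down this explicitly – whether in code, a spreadsheet, or on a whiteboard – is that gaps become obvious: any journey step without content or a channel assigned to it is a step where your self-service journey breaks.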

Owning the Journey

What we’ve outlined above will create a digital experience that allows customers to complete a purchasing decision on their own terms, which is something they increasingly want to do. Building such an experience will give you a definite advantage, but your customers will still shop around; it isn’t enough to make them home in solely on your brand (which, if we’re being honest, is an incredibly difficult task).

Digital marketing can do more than enable your scientist-customers to complete their decision journeys on their own, however. It is possible to create a digital experience that owns a hugely disproportionate share of the decision journey, providing outsized influence over it. Such mechanisms are called decision engines, and when properly implemented they give their creators massive influence on their markets. If you would like to learn more about decision engines, check out the recent podcast we did on the topic with Life Science Marketing Radio or download our report on the topic.

    "Is your life science brand adopting to the changing nature of scientists’ buying journeys? If you’re not well on your way to completing your marketing’s digital transformation, then it’s probably time to call BioBM. Not only do we have the digital skill set to develop transformational capabilities for our life science clients, but we stay one step ahead with our strategies. We live in an age of constant change, and we work to ensure that our clients aren’t simply following today’s best practices, but are positioned to be the leaders of tomorrow. We’ll provide you with the next generation of marketing strategies, which will not only elevate your products and services, but turn your marketing program into a strategic advantage. So what are you waiting for?"