We’re over half a year into the age of AI, and its abilities and limitations for both text and image generation are fairly well-known. However, the available AI platforms have had a number of improvements over the past months, and have become markedly better. We are slowly but surely getting to the point where generative image AIs know what hands should look like.
But do they know what science looks like? Are they a reasonable replacement for stock images? Those are the questions that matter if AI imagery is going to be useful for life science marketing. We set out to answer them.
A Few Notes Before I Start Comparing Things
Being able to create images which are reasonably accurate representations is the bare minimum for the utility of AI in replacing stock imagery. Once we move past that, the main questions are those of price, time, and uniqueness.
AI tools are inexpensive compared with stock imagery. A mid-tier stock imagery site such as iStock or Shutterstock will charge roughly $10 per image if paid with credits, or anywhere from $7 down to roughly a quarter per image if you purchase a monthly subscription. Of course, if you want something extremely high-quality, images from Getty Images or a specialized science stock photo provider like Science Photo Library or ScienceSource can easily cost many hundreds of dollars per image. In comparison, Midjourney’s pro plan, which is $60 / month, gives you 30 hours of compute time. Each prompt provides you with 4 images and generally takes around 30 seconds, so you could, in theory, acquire 8 images per minute, meaning each costs 0.4 cents. (In practice, with the current generation of AI image generation tools, you are unlikely to get images which match your vision on the first try.) DALL-E’s pricing is even simpler: each prompt is one credit, also provides 4 images, and credits cost $0.13 each. Stable Diffusion is still free.
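If you want to sanity-check that math, here’s a quick back-of-the-envelope sketch in Python. The prices and generation times are simply the figures quoted above, and they will inevitably drift over time.

```python
# Rough per-image cost math for the figures quoted above.
MIDJOURNEY_MONTHLY = 60.00    # pro plan, USD per month
COMPUTE_HOURS = 30            # compute time included in the plan
SECONDS_PER_PROMPT = 30       # typical generation time
IMAGES_PER_PROMPT = 4

prompts = COMPUTE_HOURS * 3600 / SECONDS_PER_PROMPT
images = prompts * IMAGES_PER_PROMPT
print(f"Midjourney: ~{MIDJOURNEY_MONTHLY / images * 100:.1f} cents per image")
# -> ~0.4 cents per image, in the theoretical best case

DALLE_CREDIT = 0.13           # USD per prompt; each prompt yields 4 images
print(f"DALL-E: ~{DALLE_CREDIT / IMAGES_PER_PROMPT * 100:.2f} cents per image")
# -> ~3.25 cents per image
```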
Having used stock image sites extensively, and having spent some time playing around with the current AI offerings for purposes other than business, it’s not clear to me which is more convenient and takes less time. Sometimes you’ll get lucky and get a good AI image on the first try, but you could say the same about stock image sites. Where AI eliminates the need to go through pages and pages of stock images to find the right one, it replaces that with tweaking prompts and waiting for the images to generate. It should be noted that there is some learning curve to using AI as well. For instance, you need to tell it to give you a “film still” or “photograph” if you want a representation of real life which isn’t meant to look illustrated and cartoonish. There are a million of these tricks, and each system has its own small library of commands worth being familiar with so you can get an optimal output. Ultimately, AI probably does take a little more time, and it also requires more skill. Mindlessly browsing for stock images is still much easier than trying to get a good output from a generative AI (although playing with AI is usually more fun).
Where stock images simply can’t compete at all is uniqueness. When you generate an image with an AI, it is a unique image; every image generated is one of one. You don’t get the “oh, I’ve seen this before” feeling that you get with stock images, which is especially prevalent for life science / laboratory topics given the relatively limited supply of scientific stock images. At some point in the not-too-distant future, we will probably no longer be able to identify by eye an AI image meant to look real. Stock images have been around for over a century, and the uniqueness problem has only become worse; it is inherent to the medium. The ability to solve that problem is what excites me most about using generative AI imagery for life science marketing.
The Experiment! Ground Rules
If this is going to be an experiment, it needs structure. Here is how it is going to work.
The image generators & stock photo sites used will be:
- Midjourney 5
- Dall-E 2
- Stable Diffusion 2.0
- iStock
- Getty Images
- Science Photo Library
I was going to include Shutterstock, but there’s a huge amount of overlap with iStock, I often find iStock to have slightly higher-quality images, and I don’t want to make more of a project out of this than it is already going to be.
I will be performing 10 searches / generations. To allow for a mix of ideas and concepts, some will be of people, some will be of things, I’ll toss in some microscopy-like images, and some will be of concepts which would normally be presented in an illustrated rather than photographed format. With the disclaimer that these concepts are taken solely from my own thoughts in the hope of achieving a good diversity of concepts, I will be looking for the following items:
- A female scientist performing cell culture at a biosafety cabinet.
- An Indian male scientist working with an LC-MS instrument.
- An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.
- A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.
- An NGS instrument on a bench in a genomics lab.
- A high-magnification fluorescent micrograph of neural tissues.
- A colored scanning electron micrograph of carcinoma cells.
- A ribbon diagram of a large protein showing quaternary structure.
- A 3D illustration of plasmacytes releasing antibodies.
- An illustration of DNA methylation.
So that nothing has an edge, none of these are things which I have recently searched for on stock image sites or previously attempted to generate using AI tools. Note that these are solely the ideas I am looking for; they are not necessarily the exact queries used when generating AI images or searching the stock photo sites.
Looking for stock images and generating AI graphics are very different processes but they both share one critical dimension: time. I will therefore be limiting myself to 5 minutes on each platform for each image. That’s a reasonable amount of time to try to either find a stock image or get a decent output from an AI. It will also ensure this experiment doesn’t take me two days. Here we go…
Round 1: A female scientist performing cell culture at a biosafety cabinet.
One thing that AI image generators are really bad at in the context of the life sciences is identifying and reproducing specific things. I thought that this one wouldn’t be too hard because these models are in large part trained on stock images, and there are a ton of stock images of cell culture, many of which look fairly similar. I quickly realized that this was going to be an exercise in absurdity and hilarity when DALL-E gave me a rack of 50 ml Corning tubes made of Play-Doh. I would be doing you a grave disservice if I did not share this hilarity with you, so I’ll present not only the best images from each round, but also the worst. And oh, there are so many.
I also realized that the only real way to compensate for this within the constraints of a 5-minute time limit is to mash the generate button as fast as I can. When your AI has only a vague idea of what a biosafety cabinet might look like and is trying to faithfully reproduce one graphically, you want it to be able to grasp at as many straws as possible. Midjourney gets an edge here because I can run a bunch of generations in parallel.
Now, without further ado, the ridiculous ones…
Round 1 AI Fails
DALL-E produced a long string of images which looked less like cell culture than like women baking lemon bars.


Midjourney had some very interesting takes on what cell culture should look like. My favorite is the one that looks like something in a spaceship and involves only machines. The woman staring at her “pipette” over her neatly arranged, unracked tubes, in the exact same manner I am staring at this half-pipette half-lightsaber, is pretty good as well. Side note: in that one I specifically asked for her to be pipetting a red liquid in a biosafety cabinet. It made the gloves and tube caps red. There is no liquid. There is no biosafety cabinet.




For those who have never used it, Stable Diffusion is hilariously awful at anything meant to look realistic. If you’ve ever seen AI images of melted-looking people with 3 arms and 14 fingers, it was probably Stable Diffusion. The “best” it gave me were things that could potentially be biosafety cabinets, but when it was off, boy was it off…


Rule number one of laboratories: hold things with your mouth. (Yes, we are obviously kidding. Do not do that.)
That was fun! Onto the “successes.”
Round 1 AI vs. Stock
Midjourney did a wonderful job of creating realistic-looking scientists in labs that you would only see in a movie. Also keeping with the movie theme, Midjourney thinks that everyone looks like a model; no body positivity required. It really doesn’t want people to turn the lights on, either. Still, the best AI results, by a country mile, were from Midjourney.


The best DALL-E could do was give me something you might mistake for cell culture at a biosafety cabinet if you weren’t actually looking at it, just catching it in passing as you turned your head.
Stable Diffusion’s best attempts are two things which could absolutely be biosafety cabinets in Salvador Dalí’s world. Also, that scientist on the right may require medical attention.


Stock image sites, on the other hand, produce some images of cell culture in reasonably realistic looking settings, and it took me way less than 5 minutes to find each. Here are images from iStock, Getty Images, and Science Photo Library, in that order:



First round goes to the stock image sites, all of which produced a better result than anything I could coax from AI. AI 0 – 1 Stock.
Round 2: An Indian male scientist working with an LC-MS instrument.
I am not confident that AI is going to know what an LC-MS looks like. But let’s find out!
One notable thing that I found is that the less specific you become, the easier things get for the AI. The below image was a response to me prompting DALL-E for a scientist working with an LC-MS, and while it whiffed on the instrument, it did manage to output a realistic-looking person in an environment that could be a laboratory. It’s not perfect and you could pick it apart if you look closely, but it’s pretty close.
A generic prompt like “photograph of a scientist in a laboratory” might work great in Midjourney, or even Dall-E, but the point of this experiment would be tossed out the window if I set that low of a bar.
Round 2 AI Fails
Midjourney:


DALL-E:


Stable Diffusion is terrible. It’s difficult to tell the worst ones from the best ones. I was going to call one of these the “best” but I’m just going to put them all here because they’re all ridiculous.



Round 2 AI vs. Stock
Midjourney once again output the best results by far, and had some valiant efforts…


… but couldn’t match the real thing. Images below are from iStock, Getty Images, and Science Photo Library, respectively.



One thing you’ve likely noticed is that none of these are Indian men! While we found good images of scientists performing LC-MS, we couldn’t narrow it down to both ethnicity and gender. Sometimes you have to take what you can get! We were generally able to find images which show more diversity, however, and it’s worth noting that Science Photo Library had the most diverse selection (although many of the images I found there are editorial use only, which is very limiting from a marketing perspective).
Round 2 goes to the stock sites. AI 0 – 2 Stock.
Round 3: An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.
This should be easier all around.
Side note: I should’ve predicted this, but as the original query merely asked for science, my initial Midjourney results made it look like the lab was presenting something out of a sci-fi game. Looked cool, but not what we’re aiming for.
Round 3 AI Fails
DALL-E presented some interesting science on the genetic structure of dog kibble.

DALL-E seemed to regress with these queries, as if drawing more than one person correctly was just way too much to ask. It produced a huge stream of almost Picasso-esque people presenting things that vaguely resembled what could, if sufficiently de-abstracted, be scientific figures. It’s as if it knows what it wants to show you but is drawing it with the hands of a 2-year-old.



Stable Diffusion is just bad at this. This was the best it could do.


Round 3 AI vs. Stock
Take the gloves off, this is going to be a battle! While Midjourney continued its penchant for lighting which is more dramatic than realistic, it produced a number of beautiful images with “data” that, while extravagant for a lab meeting, could plausibly be illustrations of some kind of life science. A few had some noticeable flaws – even Midjourney does weird stuff with hands sometimes – but they largely seem usable. After all, the intent here is as a replacement for stock images, and such images generally wouldn’t be used in a way which draws an inordinate amount of attention to them. And if someone does notice a small flaw that gives an image away as AI-generated, is that somehow worse than it clearly being stock? I’m not certain.




Stock images really fell short here. The problem is that people staging stock photos don’t have data to show, so they either don’t show anyone presenting anything, or they show them presenting something which betrays the image as generic stock. And to make the subjects look like scientists, they put them in lab coats. Scientists, however, generally don’t wear lab coats outside the lab; it’s poor lab hygiene. Put a group of scientists in a conference room and it’s unusual for them all to be wearing lab coats.
That’s exactly what iStock had. Getty Images had an image of a single scientist presenting, but you didn’t see the people he was presenting to. Science Photo Library, which has far less to choose from, also didn’t have people presenting visible data. The three comps are below:



Side Note / ProTip: You can find that image from Getty Images, as well as many other images that Getty Images labels as “royalty free,” on iStock (or other stock image sites) for way less money. Getty will absolutely fleece you if you let them. Do a reverse image search to find the cheapest option.
Considering the initial idea we wanted to convey, I have to give this round to the AI. The images are unique, and while they lack some realism, so do the stock images.
Round 3 goes to AI. AI 1 – 2 Stock.
Let’s see if Dall-E or Stable Diffusion can do better in the other categories.
Round 4: A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.
I’ve seen nice stock imagery of this before. Let’s see if AI can match it, and if I can readily find it again on the stock sites.
Round 4 AI Fails
DALL-E had a long string of images which looked like everything shown was made entirely of polystyrene and put in the autoclave at too high a temperature. You might have to click to expand to see the detail: it looks like everything partially melted, then resolidified.



Stable Diffusion is more diffuse than stable. Three of these are the best that it did while the fourth is when it gave up and just started barfing visual static.




This is the first round where Midjourney, in my opinion, didn’t do the best job. Liquid handling systems have a fair amount of variability in how they can be presented, but pipette tips do not, and Midjourney didn’t seem to know what pipette tips should look like, nor how they would be arranged in a liquid handling system. These are the closest it got:


Very pretty! Not very accurate.
Round 4 AI vs. Stock
We have a new contestant for the AI team! DALL-E produced the most realistic-looking image. Here you have it:

Not bad! Could it be an automated pipetting system? We can’t see it, but it’s possible. The spacing between the tips isn’t quite even and it looks like PCR strips rather than a plate, but hey, a microplate wasn’t part of the requirements here.
Let’s see what I can dig up for stock… Here’s iStock, Getty, and SPL, respectively:



I didn’t get the drips I was looking for – probably needed to dig more for that – but we did get some images which are obviously liquid handling systems in the process of dispensing liquids.
As valiant an effort as DALL-E made, the images just aren’t clean enough to have the photorealism of real stock images. Round goes to the stock sites. AI 1 – 3 Stock.
Round 5: An NGS instrument on a bench in a genomics lab.
I have a feeling the higher-end stock sites are going to take this; there aren’t a ton of NGS instrument models out there, so the subject might be overly specific for AI.
Round 5 AI Fails
Both Midjourney and Dall-E needed guidance that a next-generation sequencer wasn’t some modular device used for producing techno music.


With DALL-E, however, it proved to not be particularly trainable. I imagine its AI mind thinking: “Oh, you want a genome sequencer? How about if I write it for you in gibberish?” That was followed by it throwing its imaginary hands in the air and generating random imaginary objects for me.



Midjourney also had some pretty but far-out takes, such as this thing which looks much more like an alien version of a pre-industrial loom.
Round 5 AI vs. Stock
This gets a little tricky, because AI is never going to show you a specific genome sequencer, not to mention that if it did you could theoretically run into trademark issues. With that in mind, you have to give them a little bit of latitude. Genome sequencers come in enough shapes and sizes that there is no one-size-fits-all description of what one looks like. Similarly, there are few enough popular ones that unless you see a specific one, or its tell-tale branding, you might not know what it is. Can you really tell the function of one big gray plastic box from another just by looking at it? Given those constraints, I think Midjourney did a heck of a job:





There is no reason that a theoretical NGS instrument couldn’t look like any of these (although some are arguably a bit small). Not half bad! Let’s see what I can get from stock sites, which also will likely not want to show me logos.
iStock had a closeup photo of a MinION which, while it technically fits the description of what we were looking for, doesn’t fit the intent. Aside from that, it had a mediocre rendering of something supposed to be a sequencer and a partial picture of something rather old which might be a Sanger sequencer.



After not finding anything at all on Getty Images, down to the wire right at the 5:00 mark I found a picture of a NovaSeq 6000. Science Photo Library had an image of an ABI SOLiD 4 on a bench in a lab with the lights off.


Unfortunately, Getty has identified the person in the image, meaning that even though you couldn’t ID the individual just by looking at it, the image isn’t suitable for commercial use. I’m therefore disqualifying that one. Is the oddly lit (and extremely expensive) picture of the SOLiD 4, or the conceptually off-target picture of the MinION, better than what the AI came up with? I don’t think I can conclusively say either way, and one thing that I dislike doing as a marketer is injecting my own opinion where it shouldn’t be. The scientists should decide! For now, this will be a tie.
AI 1, Stock 3, Tie 1
Round 6: A high-magnification fluorescent micrograph of neural tissues.
My PhD is in neuroscience, so I love this round. If Science Photo Library doesn’t win it, they should pack up and go home. Let’s see what we get!
Round 6 AI Fails
DALL-E got a rough, albeit slightly cartoony, shape of neurons but never really coalesced into anything that looked like a genuine fluorescent micrograph (top left and top center in the image below). Stable Diffusion, on the other hand, was either completely off the deep end or just hoping that if it overexposed out-of-focus images enough, it could slide by (top right and bottom row).






Round 6 AI vs. Stock
Midjourney produced a plethora of stunning images. They are objectively beautiful and could absolutely be used in a situation where one only needed the concept of neurons rather than an actual, realistic-looking fluorescent micrograph.



They’re gorgeous, but they’re very obviously not faithful reproductions of what a fluorescent micrograph should look like.
iStock didn’t produce anything on-target within the time limit. I found high-magnification images of neurons which were not fluorescent (probably colored TEM), fluorescent images of neuroblastomas (not quite right), and illustrations of neurons which were not as interesting as those above.
Getty Images did have some, but Science Photo Library had pages and pages of on-target results. SPL employees, you still have jobs.
AI 1, Stock 4, Tie 1
Round 7: A colored scanning electron micrograph of carcinoma cells.
This is another one where Science Photo Library should win handily, but there’s only one way to find out!
Round 7 AI Fails
None of the AI tools failed spectacularly enough to be funny this round. DALL-E produced results which suggested it almost understood the concept, though it could never quite put it together. Here’s a representative selection from DALL-E:






… and from Stable Diffusion, which as expected was further off:



Round 7 AI vs. Stock
Midjourney actually got it, and if these aren’t usable, they’re awfully close. As with the last round, these would certainly be usable if you needed to communicate the concept of a colored SEM image of carcinoma cells more than you needed accurate imagery of them.





iStock didn’t have any actual SEM images of carcinomas that I could find within the time limit, and Midjourney seems to do just as good a job as the best illustrations I found there:


Getty Images did have some real SEM images, but the ones I found were credited to Science Photo Library, and their selection was absolutely dwarfed by SPL’s collection, which again had pages and pages of images of many different cancer cell types.
Here’s where this gets difficult. On one hand, we have images from Midjourney which would take the place of an illustration and which cost me less than ten cents to create. On the other hand, we have actual SEM images from Science Photo Library that are absolutely incredible, not to mention real, but depending on how you want to use them, would cost somewhere in the $200 – $2000 range per photo.
To figure out who wins this round, I need to get back to the original premise: Can AI replace stock in life science marketing? These images are every bit as usable as the options from iStock. Are they as good as the images from SPL? No, absolutely not. But are marketers always going to want to spend hundreds of dollars for a single stock photo? Also no. There are times when it will be worth it, but many times it won’t be. All things considered, I have to call this round a tie.
AI 1, Stock 4, Tie 2
Round 8: A ribbon diagram of a large protein showing quaternary structure.
This is something that stock photo sites should have in droves, but we’ll find out. To be honest, for things like this I personally search for images with friendly licensing requirements on Wikimedia Commons, which in this case gives ample options. But that’s outside the scope of the experiment, so on to round 8!
Round 8 AI Fails
I honestly don’t know why I’m still bothering with Stable Diffusion. The closest it got was something which might look like a ribbon diagram if you took a massive dose of hallucinogens, but it mostly output farts.




Dall-E was entirely convinced that all protein structures should have words on them (a universally disastrous yet hilarious decision from any AI image generator) and I could not convince it otherwise:



This has always baffled me, especially as it pertains to DALL-E, since it’s made by OpenAI, the creators of ChatGPT. You would think it would be able to at least output actual words, even if used nonsensically, but apparently we aren’t that far into the future yet.
Round 8 AI vs. Stock
While Midjourney did listen when I told it not to use words and provided predictably beautiful output, the results are obviously not genuine protein ribbon diagrams. Ribbon diagrams have a very specific look, and this is not it.





I’m not going to bother digging through all the various stock sites because there isn’t a competitive entry from team AI. So here’s a RAF-1 dimer from iStock, and that’s enough for the win.
AI 1, Stock 5, Tie 2. At this point AI can no longer catch up to stock images, but we’re not just interested in which “team” is going to “win,” so I’ll keep going.
Round 9: A 3D illustration of plasmacytes releasing antibodies.
I have high hopes for Midjourney on this. But first, another episode of “Stable Diffusion Showing Us Things”!
Round 9 AI Fails
Stable Diffusion is somehow getting worse…






DALL-E was closer, but also took some adventures into randomness.



Midjourney wasn’t initially giving me the results that I hoped for, so to test if it understood the concept of plasmacytes I provided it with only “plasmacytes” as a query. No, it doesn’t know what plasmacytes are.
Round 9 AI vs. Stock
I should just call this Midjourney vs. Stock at this point. Regardless, Midjourney didn’t quite hit the mark. Plasmacytes have an inordinately large number of names (plasma cells, B lymphocytes, B cells, etc.), and while Midjourney did eventually get the idea of a cell releasing something, the cells never looked quite right, and the particles they release look nothing like antibodies.



I found some options on iStock and Science Photo Library (shown below, respectively) almost immediately, and the SPL option is reasonably priced if you don’t need it in extremely high resolution, so my call for Midjourney has not panned out.


Stock sites get this round. AI 1, Stock 6, Tie 2.
Round 10: An illustration of DNA methylation.
This is fairly specific, so I don’t have high hopes for AI here. The main question in my mind is whether stock sites will have illustrations of methylation specifically. Let’s find out!
Round 10 AI Fails
I occasionally feel like I have to fight with Midjourney to not be so artistic all the time, but adding things like “realistic looking” or “scientific illustration of” didn’t exactly help.



Midjourney also really wanted DNA to be a triple helix. Or maybe a 2.5-helix?



I set the bar extremely low for Stable Diffusion and just tried to get it to draw me DNA. Doesn’t matter what style, doesn’t need anything fancy, just plain old DNA. It almost did! Once. (Top left below.) But in the process it also created a bunch of abstract mayhem (bottom row below).






With anything involving “methylation” in the query, DALL-E did that thing where it tries to replace accurate representation with what it thinks are words. I therefore tried to just give it visual instructions, but that proved far too complex.



Round 10 AI vs. Stock
I have to admit, I did not think that it was going to be this hard to get reasonably accurate representations of regular DNA out of Midjourney. It did produce some, but not many, and the best looked like it was made by Jacob the Jeweler. If methyl groups look like rhinestones, 10/10. DALL-E did produce some things that look like DNA stock images circa 2010. All of these have the correct helix orientation as well: right-handed. That was a must.



iStock, Getty Images, and Science Photo Library all had multiple options for images to represent methylation. Here is one from each, shown in the aforementioned order:



The point again goes to stock sites.
Final Score: AI 1, Stock 7, Tie 2.
Conclusion / Closing Thoughts
Much like generative text AI, generative image AI shows a lot of promise but doesn’t yet have the specificity and accuracy needed to be broadly useful. It has a way to go before it can reliably replace stock photos and illustrations of laboratory and life science concepts for marketing purposes. However, for concepts which are fairly broad, or in cases where getting the idea across is sufficient, AI can sometimes act as a replacement for basic stock imagery. As for me, if I get a good feeling that AI could do the job and I’m not enthusiastic about the images I’m finding on lower-cost stock sites, I’ll most likely give Midjourney a go. Sixty dollars a month gets us functionally infinite attempts, so the value is pretty good. If we get a handful of usable images out of it each month, that’s fine – and there are some from this experiment we’ll certainly be keeping on hand!
I would not be particularly comfortable about the future if I were a stock image site, but especially for higher-quality or more specialized / specific images, AI has a long way to go before it can replace them.
Unfortunately, Google has attempted to make auto-applied recommendations ubiquitous.
Google Ads has been rapidly expanding their use of auto-applied recommendations recently, to the point where it briefly became my least favorite thing until I turned almost all auto-apply recommendations off for all the Google Ads accounts which we manage.
Google Ads has a long history of thinking it’s smarter than you and failing. Left unchecked, its “optimization” strategies have the potential to drain your advertising budgets and destroy your advertising ROI. Many users of Google Ads’ product ads will be familiar with this. Product ads don’t allow you to set targeting; instead, Google chooses the targeting based on the content of the product page. That, by itself, is fine. The problem is when Google tries to maximize its ROI and looks to expand the targeting contextually. To give a practical example, we were managing an account advertising rotary evaporators. Rotary evaporators are very commonly used in the cannabis industry, so people would sometimes search for rotary evaporator terms along with cannabis terms. Google “learned” that cannabis-related terms were relevant to rotary evaporators, starting a downward spiral which eventually led to Google showing this account’s product ads for searches such as “expensive bongs.” Most people looking for expensive bongs probably saw a rotary evaporator, didn’t know what it was but did see it was expensive, and clicked on it out of curiosity. Google took that as a cue that rotary evaporators were relevant to searches for “expensive bongs” and continued to expand outwards from there. Because the ads kept getting clicks, the end result was us continuously playing negative keyword whack-a-mole, trying to exclude all the increasingly irrelevant terms that Google thought were relevant to rotary evaporators. Over time, this devolved into Google expanding the rotary evaporator product ads to searches for – and this is not a joke – “crack pipes.”
The moral of that story (which is not about auto-applied recommendations) is that Google does not understand complex products and services such as those in the life sciences. It likewise does not understand the complexities and nuances of individual life science businesses. It paints in broad strokes: broad strokes are easier to code, the managers don’t care because their changes make Google money, and with something of a monopoly – about 90% of search volume excluding China – Google has very little incentive to improve its services, because almost no one is going to pull their advertising dollars. Having had some time to watch the changes which Google’s auto-apply recommendations make, we can see the implicit assumptions built into them: Google either thinks you are selling something like pizza or legal services and largely have no clue what you’re doing, or that you have a highly developed marketing program with holistic, integrated analytics.
As an example of the damage that Google’s auto-applied recommendations can do, take a CRO we are working with. Like many CROs, they offer services across a number of different indications, with different ad groups for different indications. After Google had auto-applied some recommendations, some of which were bidding-related, we ended up with ad groups which had over a 100x difference in cost per click. With highly specific, targeted keywords in every ad group, there is no reasonable argument for how optimizing for conversions could lead Google to decide that one ad group’s CPC should be more than 100x that of another. The optimizations did not lead to more conversions, either.
Google’s “AI” ad account optimizer further decided to optimize a display ad campaign for the same client by changing bidding from manual CPC to optimizing for conversions. The campaign went from getting about 1800 clicks per week at a cost of about $30 to getting 96 clicks per week at a cost of $46. CPC went from $0.02 to $0.48! No wonder Google wanted to change the bidding: it showed the ads roughly 19x less (CTR was not materially different before and after the auto-applied recommendations) and charged 24x more per click. Note that the targeting did not change. What Google was optimizing for was its own revenue per impression! It’s the same thing it’s doing when it decides to show rotary evaporator product ads on searches for crack pipes.
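To make the arithmetic explicit, here’s a minimal sketch in Python using the rounded weekly figures quoted above (treat the outputs as approximations, since the inputs are themselves rounded):

```python
# Weekly campaign figures quoted above, before and after Google's
# auto-applied "optimize for conversions" bidding change.
clicks_before, cost_before = 1800, 30.00
clicks_after, cost_after = 96, 46.00

# CPCs, rounded to the cent as in the text.
cpc_before = round(cost_before / clicks_before, 2)   # $0.02
cpc_after = round(cost_after / clicks_after, 2)      # $0.48
print(f"CPC: ${cpc_before:.2f} -> ${cpc_after:.2f} "
      f"({cpc_after / cpc_before:.0f}x more per click)")

# CTR was essentially unchanged, so impressions scale with clicks.
print(f"Ads shown: ~{clicks_before / clicks_after:.0f}x less often")

# Google's revenue per impression is CPC x CTR; with CTR constant, it
# rose by the same factor as CPC: fewer ads shown, far more money each.
```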

Furthermore, Google’s optimizations to the ads themselves amount to horribly generic guesswork. A common optimization is to simply include the name of the ad group or terms from pieces of the destination URL in ad copy. GPT-3 would be horrified at the illiteracy of Google Ads’ optimization “AI”.
A Select Few Auto-Apply Recommendations Are Worth Leaving On
Google has a total of 24 recommendation types. Of those, I always leave on:
- Use optimized ad rotation. There is very little opportunity for this to cause harm, and it addresses a question difficult to answer on your own: which ads will work best at which times. Just let Google figure this out. There isn’t any potential for misaligned incentives here.
- Expand your reach with Google search partners. I always have this on anyway. It’s just more traffic. Unless you’re particularly concerned about the quality of traffic from sites which aren’t google.com, there’s no reason to turn this off.
- Upgrade your conversion tracking. This allows for more nuanced conversion attribution, and is generally a good idea.
A whole 3/24. Some others are situationally useful, however:
- Add responsive search ads can be useful if you’re having problems with quality score and your ad relevance is stated as being “below average.” This will, generally, allow Google to generate new ad copy that it thinks is relevant. Be warned: Google is very bad at generating ad copy. It will frequently keyword-spam without regard to context, but at least you’ll see what it wants you to do to generate more “relevant” ads. Note that I suggest this over “improve your responsive search ads” so that Google doesn’t destroy the existing ad copy which you may have spent time and effort creating.
- Remove redundant keywords / remove non-serving keywords. Google says that these options will make your account easier to manage, and that is generally true. I usually leave these off, because if I have a redundant keyword it is usually there for a good reason, and non-serving keywords may occasionally start serving again if volume improves for a period of time. But if your goal is simplicity over deeper data and capturing every possible impression, leave these on.
That’s all. I would recommend leaving the other 18 off at all times. Unless you are truly desperate and at a complete loss for ways to grow your traffic, you should never allow Google to expand your targeting. That lesson has been repeatedly learned with Product Ads over the past decade plus. Furthermore, do not let Google change your bidding. Your bidding methodology is likely a very intentional decision based on the nature of your sales cycle and your marketing and analytics infrastructure. This is not a situation where best practices are broadly applicable, but best practices are exactly what Google will try to enforce.
If you really don’t want to be bothered at all, just turn them all off. You won’t be missing much, and you’re probably saving yourself some headaches down the line. From our experience thus far, the ability of Google Ads’ optimization AI to improve campaigns for life science companies is far less than its ability to create mayhem.
Why not leverage our understanding to your benefit? Contact Us.
I know this isn’t going to apply to 90% of you, and to anyone who is thinking “of course – why would anyone do that?” I apologize for taking your time. Those who see this as obvious can stop reading. What that 90% may not know, however, is that the other 10% still think, for some terrible reason, that hosting their own videos is a good idea. So, allow me to state conclusively:
Hosting your own videos is always a terrible decision. Let’s elaborate.
Reasons Why Hosting Your Own Videos Is A Terrible Decision:
- Your audience is not patient. If you think they’re going to wait through more than one or two (if you’re lucky) periods of buffering, you’re wrong. Videos are expensive to produce. If you’re putting in the resources to make a video, chances are you want as much of your audience as possible to see it. Buffering will ensure they don’t.
- Your servers are not built for this. Your website is most likely hosted on a server which is designed to serve up webpages. Streaming video content is probably not your host’s cup of tea. In fact, they’d probably rather you not do it (or tell you to buy a super-expensive hosting plan to accommodate the bandwidth requirements of streaming video).
- Your video compression is probably terrible. Your video editing software will certainly export your video into a compressed file. “Compressed,” in this sense, means it is not the giant, unwieldy raw data file that you would otherwise have; it does not mean “small enough to stream effectively.” You know whose video compression is levels beyond anything else you’re going to find? YouTube, Vimeo, and probably most other major services that stream video on the internet as a business.
- There are companies that do this professionally. When I was in undergrad and majoring in chemical engineering, the other majors jokingly referred to us as “glorified plumbers,” but I don’t touch pipes. I don’t know the first thing about plumbing. So what do I do when I get a leak? I call a plumber, because they’ll definitely solve the problem far better than I would. Likewise, if you want to host video, why not get a professional video hosting service? There’s plenty of them out there, including some that are both very reputable and inexpensive.
An Example
I’m at my office on a reasonably fast internet connection. It’s cable, not fiber optic, but it’s also 11:30 in the morning – not prime “Netflix and chill” time when the intertubes are clogged up with people binge watching a full season of House of Cards. Just to show you that any bandwidth problems aren’t on my end, I did an Ookla Speedtest:

The internet is fast.
239 Mbps. Not tech school campus internet kind of fast, but more than fast enough to stream multiple YouTube videos at 4k if I wanted to.
And now for the example… I’m not going to tell you whose video this is, but they have an ~1-minute-long video to show how easy their product is to use. Luckily for me, they don’t have a lot of branding on it, so I can use them as an example without shaming them. The below screenshots are where the video stopped to buffer. Note that the video was not fullscreened and was about 1068 x 600. You can click the images to see them full size and see the progress bar and time at the bottom.

37 seconds. There’s no way I’d still be watching this if I wasn’t doing this for the purposes of demonstration.

“Done” … or not quite done. 56 seconds. I don’t even know why it stopped to buffer here as almost the entire rest of the video was already downloaded.
The video stopped playing 7 times in the span of 64 seconds.
What To Do Instead
Perhaps the most well-known paid video hosting service, Vimeo has a pro subscription that will allow you to embed ad-free videos without Vimeo branding for $20 / month. There are a bunch of other, similar services out there as well. Or, if you don’t want to spend anything and don’t mind the possibility of an ad being shown before your video, you can just embed YouTube videos. The recommended videos which show after playback can easily be turned off in the embed options. You can even turn off the video title and player controls if you don’t want your audience to be able to click through to YouTube or see the bar at the bottom (although the latter also makes them unable to navigate through your video).
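For the hands-on crowd, here’s a minimal sketch in Python of what those embed options look like in practice. The `rel`, `controls`, and `modestbranding` player parameters are real YouTube embed parameters, though YouTube has tweaked their exact behavior over the years, so check the current IFrame Player API documentation; `VIDEO_ID` is a placeholder.

```python
# A minimal sketch: build a YouTube <iframe> embed using the player
# options discussed above. Verify current parameter behavior against
# YouTube's IFrame Player API docs before relying on this.
from urllib.parse import urlencode

def youtube_embed(video_id: str, hide_related: bool = True,
                  hide_controls: bool = False) -> str:
    """Return an <iframe> snippet for embedding a YouTube video."""
    params = {}
    if hide_related:
        params["rel"] = 0             # limit/suppress recommended videos
    if hide_controls:
        params["controls"] = 0        # hide the player control bar
        params["modestbranding"] = 1  # minimize YouTube branding
    src = f"https://www.youtube.com/embed/{video_id}"
    if params:
        src += "?" + urlencode(params)
    return (f'<iframe width="560" height="315" src="{src}" '
            'frameborder="0" allowfullscreen></iframe>')

# Example: embed with recommended videos turned off (VIDEO_ID is a placeholder).
print(youtube_embed("VIDEO_ID"))
```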
Basically, if you want your videos to actually get watched, do anything other than hosting them yourself.
P.S. – If you’ve read all this and still think hosting your own videos is the correct solution, which it’s not, here’s a tip: upload them to YouTube, then download them using a tool like ClipConverter. This way you’ll at least get the benefit of YouTube’s video compression, which is probably the best in the world.
Principal Consultant Carlton Hoyt recently sat down with Chris Conner for the Life Science Marketing Radio podcast to talk about decision engines, how they are transforming purchasing decisions, and what the implications are for life science marketers. The recording and transcript are below.
Transcript
CHRIS: Hello and welcome back. Thank you so much for joining us again today. Today we’re going to talk about decision engines. These are a way to help ease your customer’s buying process when there are multiple options to consider. So we’re going to talk about why that’s important and the considerations around deploying them. So if you offer lots and lots of products and customers have choices to make about the right ones, you don’t want to miss this episode.
Marketers are used to seeing a lot of data showing that improving personalization leads to improved demand generation. The more you tailor your message to the customer, the more relevant that message will be and the more likely the customer will choose your solution. Sounds reasonable, right?
In most cases personalization is great, but what those studies and all the “10,000-foot view” data miss is that there is a subset of customers for whom personalization doesn’t help. There are times when personalization can actually hurt you.
When Personalization Backfires
Stressing the points which are most important to an individual works great … when that individual has sole responsibility for the purchasing decision. For large or complex purchases, however, that is often not the case. When different individuals involved in a purchasing decision have different priorities and are receiving different messages tailored to their individual needs, personalization can act as a catalyst for divergence within the group, leading different members to reinforce their own needs and prevent consensus-building.
Marketers are poor at addressing the problems in group purchasing. A CEB study of 5,000 B2B purchasers found that the likelihood of any purchase being made decreases dramatically as the size of the group making the decision increases: from an 81% likelihood of purchase for an individual to just 31% for a group of six.
For group purchases, marketers need to focus less on personalization and more on creating consensus.
Building Consensus for Group Purchases
Personalization reinforces each individual’s perspective. In order to more effectively sell to groups, marketers need to reinforce shared perspectives of the problem and the solution. Highlight areas of common agreement. Use common language. Develop learning experiences which are relevant to the entire group and can be shared among them.
Personalization focuses on convincing individuals that your solution is the best. To better build consensus, equip individuals with the tools and information they need to provide perspective about the problem to their group. While most marketers spend their time pushing their solution, the CEB found that the sticking point for most groups is agreeing on the nature of the solution that should be sought. By giving the individuals who may favor your solution the ability to frame the nature of the problem for others in their group, you’ll help those with a nascent desire to advocate for you get past this sticking point and guide the group to be receptive to your type of solution. Having helped them clear that critical barrier, you’ll be better positioned for the remaining fight: the one against your direct competitors.
Winning a sale requires more than just understanding the individual. We’ve been trained to believe that personalization is universally good, but that doesn’t align with reality. For group decisions, ensure your marketing isn’t reinforcing the individual, but rather building consensus within the group. Only then can you be reliably successful at not only overcoming competing companies, but overcoming the greatest alternative of all: a decision not to purchase anything.
Affinity has a transformational effect on brands.
Google, Facebook, Apple and Amazon have all moved beyond having a simple transactional relationship with their customers to one that creates intimacy and serves their needs in a more holistic manner. These companies are generous, they are unselfish, and their approach is well beyond one of asking for the next sale. Whereas most companies self-promote in order to obtain the customer’s next purchase, elite brands seek not only to create customer loyalty, but to be loyal to their customers.
The overwhelming majority of companies are only good at fostering transactional affiliations with customers. They ask for the customer’s business, the customer gives it to them, and that is largely the end of the relationship. Companies frequently try to obtain repeat business; those who do so well attract supporters – customers who have moved beyond individual transactions, consciously prefer the brand, and buy repeatedly. Relatively few companies are effective at recruiting promoters: people who actively share their positive impression of your brand through advocacy to others. Brands which have strong networks of promoters are often very successful, but there is a fourth level of customer affinity, one that not only drives even further degrees of loyalty but also leverages customer assets to build brand value even further, creating a positive feedback loop for both the brand and its customers: co-creation.
Co-creators actively add value to the brand by contributing to its offerings for other customers. They are so invested in the brand that they add to it themselves. This may be altruistic, but it may also be done to realize some kind of return, whether financial, reputational, or otherwise.
Increasing Affinity
Most companies pay careful attention to how loyal their customers are to them, measuring things like net promoter score and tracking sentiment on social media. They think that good customer service will win the loyalty of customers, and while good customer experiences may turn transactors into supporters and perhaps even the occasional promoter, good service is not enough to routinely transform customers’ affinity to the highest levels. In order to move up the affinity ladder, brands need to not only focus on how loyal their customers are, but how loyal the brand is to their customers. If a customer is anything more than a transactor, they are giving you more than money. Likewise, you need to be doing something more than selling products and services (in other words, creating transactions) to better foster that affinity. You need to actively add value to the lives of your customers outside of the transactional realm.
Building co-creation opportunities often, but not always, requires a degree of altruism. You must seek to provide opportunities for your target market which do not actually cost them anything.
Examples of Co-Creation
Many businesses are built entirely around co-creation. Yelp, like any user-driven recommendation website, is almost entirely based on co-creation. Facebook is driven by co-creation. Airbnb is a co-creative endeavor, relying on its hosts to build the success of its platform. Your business, however, does not need to be centered on a co-creation business model in order to leverage co-creation for increased customer affinity.
Customer-centric resources are tools that any company can use to greatly heighten customer affinity. By helping customers solve problems outside the context of a buying journey, you will provide massively positive experiences that will increase affinity. While resources do not require a co-creation component, such a component may be integrated into them. Consider the Nike+ ecosystem, where users can share workouts, compare progress with friends, and help motivate each other. The GoPro Channel is another well-known co-creation resource, where GoPro leverages its own popularity to support its customers’ best creations.
Social Media, “Engagement” and the Affinity Failure
Many marketers consider themselves to have succeeded at forging relationships with customers if they have high “engagement” metrics or large social followings. These are not indicators of affinity and are often vanity metrics. A social follow is by no means an indication of support, and it certainly does not suggest that the follower will promote your brand. In the life sciences and most B2B industries, social media is largely a platform for the dissemination of content. It is a utilitarian tool. While the ability to foster personal relationships with members of your target audience certainly exists, social media is not a natural channel for brand-customer communication. If your goals are to increase your audience size and reach, seek new social followers. If your goals are to increase customer affinity, look for non-transactional ways to provide value to your audience.
As customers not only take greater control of their purchasing decision journeys but compress them as well, brand affinity becomes increasingly important. Those brands which are able to create heightened levels of customer affinity will have immense advantage in an accelerated journey which reduces the consideration and evaluation phases. Customers are increasingly making decisions based on established preferences. The brands with the greatest customer affinity will be the winners.
We recently cited some newly released findings from the Boston Consulting Group (BCG) stating that “display retargeting from paid search ads can deliver a 40 percent reduction in CPA.” It was met with some hesitation from Mariano Guzmán of Laboratorios Conda, who stated:
“[…] when I have clicked on a [life science website] what I have experienced is a tremendous amount of retargeting for 1 month that I have not liked at all as an internet user, and I do not feel my clients would as well”
Being me, I like to answer questions with facts as much as possible, so I dug some up. This one’s for you, Mariano!
To directly address Mariano’s concern, I found some studies on people’s opinions of retargeting. A 2012 Pew Research study found that 68% of people are “not okay with it” due to behavior tracking, while 28% are “okay with it” because of more relevant ads and information (4% had no opinion). I’m a little skeptical of the Pew study because it primed the audience with reasons to “be okay” or “not be okay” with remarketing. In a sense, respondents were choosing between behavior tracking plus more relevant ads vs. no behavior tracking plus less relevant ads. When users actually see the ads, however, the ads don’t say “by the way, we’re tracking your behavior.” Are some users aware of this? Certainly. Might some think of it consciously? On occasion, sure, but nowhere near 100% of the time. However, 100% of the Pew study respondents were aware of it.
A slightly more recent 2013 study, commissioned by Adroit Digital and performed by Toluna, asked the question in a much more neutral manner (see page three of the linked-to study). They found that 30% of people have a positive impression of a brand for which they see retargeting ads, only 11% have a negative impression, and 59% have a neutral impression.
The Pew study and the Adroit Digital study did agree on one thing – remarketing ads get noticed. In both, almost 60% of respondents noticed ads related to previous sites visited or products viewed.
Now to the undeniably positive side: the gains a company stands to make from remarketing.
In addition to the 40% reduction in cost per action cited in the aforementioned BCG study, a 2014 report from BCG entitled “Adding Data, Boosting Impact: Improving Engagement and Performance in Digital Advertising” found that retargeting improves overall CPC by 10%.
A 2010 comScore study evaluated the change in branded search queries for different types of digital advertising and found retargeting had provided the largest increase: 1046%.
In a 2011 Wall Street Journal article, Sucharita Mulpuru, an analyst at Forrester Research, stated that retail conversion rates are 3% on PCs and 4% to 5% on tablets. According to the National Retail Federation, 8% of customers will return to make a purchase on their own. Retargeting increases that number more than three-fold, to 26%.
There are many more studies that sing the praises of remarketing; however, I wanted to stay away from case studies that investigate only single companies, as well as from data collected and presented by advertising service providers.
Here are my thoughts on the matter: Do some customers view retargeting unfavorably? Certainly, but that’s the nature of advertising. No matter what form it takes, some people will object to it. Considering that there is nothing ethically wrong with retargeting, we can’t give up on something that is proven to be a highly effective tactic because some people have an objection to it. In the end, it’s our job as marketers to help create success for the organizations we serve.
The most precious and limited resource that life science marketers and salespeople must fight for is undoubtedly money. Everyone is trying to get a piece of those often set-in-stone lab budgets. Before that battle, however, comes an equally important one, involving a resource that is almost as scarce and becoming scarcer: the attention of your audience.
Attention is a resource that is inherently limited. Each person only has so many hours in the day. As more companies (and other distractions) vie for their attention, it behaves like any limited resource under increasing demand – the cost goes up.
Most marketing campaigns ignore this fact. They’re built under the assumption that the audience will care about what you have to say, but that’s a very poor assumption to make in most circumstances. Perhaps in a world of unlimited time and attention that would be the case, but will the audience care more about what you have to say than all the other things that are vying for their attention at that point in time? Put in that perspective, the answer is often a clear “no.”
So what can we do to obtain and keep scientists’ attention such that our messages even have a chance of getting through? How do we ensure that we have enough attention to effectively educate and persuade them that our viewpoints are correct and they should purchase from us? In addition to creating the standard campaign elements, you need to build in a mechanism to ensure you’re doing the following…
Step 1: Captivate
Interruptions can be easily ignored. We’re all trained to do it. Think about it… How many banner advertisements do you see in a day? How many email promotions? How many TV commercials or magazine ads or billboards? Now how many do you actually pay attention to? How many can you remember?
The lesson here is that interruptions are very ineffective. However, unless you’ve already built a large audience or community, you’re pretty much limited to interruption tactics. Those tactics will get the audience’s attention infrequently, so you have to make it matter. The first thing you need to do when you get that scarce bit of attention is ensure you’ll get it for more than a fleeting moment. You need to captivate your audience.
The worst thing that you can do – which most marketers do anyway – is start by expressing a “what” statement. In general, your audience does not (yet) care about what you are or what you’re selling. You need to lead off with a statement of belief – a “why” statement – that is both emotionally compelling to the audience and something they can agree with.
Step 2: Hold
That first interaction won’t last forever, so you need to ensure that you’ll be able to reclaim their attention when you next need it. That first interaction must create recognition of need. The need doesn’t have to be for your product or service, but rather for the information to follow. They need to understand that there is more to learn and future information will benefit them.
The most common way for a campaign to execute this is with an email signup followed by drip marketing. This runs into the problem of requiring their attention at a specific point in time. Once an email gets put aside for later, it becomes far less likely to be read. Support your continued communications with other means of reminding the audience, such as automatically triggered reminder emails or display remarketing ads.
(Quick side note: people are more likely to respond to loss than to gain. If you’re having trouble crafting messages that keep the audience’s attention, play off this loss aversion. Tell the audience what they are currently losing or stand to lose rather than what they might gain.)
Step 3: Build
There will always be people who would likely buy from you at some point, but cannot or will not buy now. You want to retain their attention to make a purchase at a later date more likely. Even for those who do buy, you want to use your command of their current attention to make it easier to regain that attention later.
As interruption marketing becomes less effective, you need to ensure you have a pool of people who have given you permission to get their attention. This can be done by creating valuable resources for your market which are likely to be repeatedly referenced and revisited. It can be done through community-building efforts. It can be done through regular distribution of high-quality content. Whatever you’re doing, it needs to be something that makes your audience want to come back for more. Ideally, your continuous re-engagement efforts should also be on a channel that you control to ensure that you won’t have any trouble getting promotional messages across when you need to and you can exert control over the channel to ensure it remains of high value for the audience.
You can’t convey a message unless you have your audience’s attention. The next time you’re creating a campaign, be sure that you build in a capacity to captivate the audience and retain their attention.