
Category: Digital Marketing

Can DALL-E 3 Generate Passable Life Science Images?

For those uninitiated to our blog: a few months ago I ran a fairly extensive, structured experiment comparing DALL-E 2, Midjourney 5, and Stable Diffusion 2 to see if any of them could potentially replace generic life science stock imagery. It ended up being both informative and accidentally hilarious, and you can see the whole thing here. But that was back in the far-gone yesteryear of July; it is currently December, and we live in the early era of AI, which means that months are now years and whatever happened 5 months ago is surely obsolete. Since DALL-E 3 came out in October, it's worth finding out whether it does better than DALL-E 2 did in the previous round, where DALL-E 2 was notably inferior to Midjourney in 9 of the 10 queries.

Perhaps I’ll do a more comprehensive comparison again later, but for now I’m just going to run some queries similar to the ones used last time to get a reasonable side-by-side comparison. Bing Image Creator was used to generate the images, since labs.openai.com, which was used last time, is still plugged into DALL-E 2.

Test 1: A female scientist performing cell culture at a biosafety cabinet.

The last time we tried this, DALL-E 2 gave us images that looked 75% like a picture and 25% like claymation, but even if that problem wasn’t there it was still somewhat far off. Let’s see if DALL-E 3 can do better.

I tried to be a little bit descriptive with these prompts, as DALL-E 3 supposedly uses GPT-4 and better understands written requests. Supposedly. Here’s what it gave me for “A photograph of a female scientist in a laboratory sitting at a biosafety cabinet holding a serological pipette performing cell culture. Her cell culture flasks have yellow caps and her cell culture media is red.” It definitely got the yellow caps and red media. As for the rest…

It’s immediately clear that DALL-E 3, just like all its ilk, was primarily trained from large repositories of generic stock images, because all these labs look like what you would imagine a lab would look like if you didn’t know what a lab actually looked like. There are plenty of generic microscopes close at hand, although it didn’t even get those right. There are no biosafety cabinets to be found. Those vessels are essentially test tubes, not cell culture flasks. To top it off, all the female scientists look like porcelain dolls modeling for the camera. I tried to fix at least one of those things and appended “She is attentive to her work.” to the subsequent query. Surprisingly, this time it seemed to make some subtle attempts at things which might be construed as biosafety cabinets, but only to a completely naive audience (and, of course, it put a microscope in one of them).

Since DALL-E 2 arguably provided more realistic looking people in our previous test, I reverted to the simplicity of the previously used query: “A photograph of a female scientist performing cell culture at a biosafety cabinet.”

We’re not getting any closer. I have to call this an improvement because it doesn’t look like the image is melting, but it’s still very far from usable for a multitude of reasons: the plasticware is wrong, the pipettes are wrong, the people still look like dolls, the biosafety cabinets aren’t right, some of the media seems to be growing alien contamination, the background environment isn’t realistic, etc.

Today’s comic relief is brought to you by my attempt to get it to stop drawing people as porcelain dolls. I Googled around a bit and found that queries structured differently sometimes are better at generating realistic looking people so I gave this prompt a go: “2023, professional photograph. a female scientist performing cell culture at a biosafety cabinet.” What a gift it gave me.

Test 2: Liquid dripping from pipette tips on a high-throughput automated liquid handling system.

I’m choosing this one because it was the only query that DALL-E 2 was almost good at in our previous comparison. Out of 10 tests in that experiment, Midjourney produced the best output 9 times and DALL-E once. This was that one. However, stock imagery was still better. DALL-E 2’s image didn’t capture any of the liquid handler and the look of the image was still a bit melty. Let’s see if it’s improved!

Prompt: “A close up photograph of liquid dripping from pipette tips on a high-throughput automated liquid handling system.”

DALL-E 3 seems to have eschewed realism entirely and instead picked up Midjourney’s propensity for movie stills and sci-fi. Perhaps more specificity will solve this.

Prompt 2: “A close up photograph of liquid being dispensed from pipette tips into a 96-well microplate in a high-throughput automated liquid handling system.”

DALL-E clearly only has a vague idea of what a 96-well plate looks like and also cannot count; none of these “plates” actually have 96 wells. Regardless, these are no more realistic, clearly unusable, and DALL-E 2’s output would likely have a far greater probability of passing as real.

So nope, we’re still not there yet, and Midjourney is probably still the best option for realistic looking life science images based on what I’ve seen so far.

… but what about micrographs and illustrations?

All the previous tests dealt with recreations of real-world images. What about the images a microscope would take, or scientific illustrations? To find out, I ran four prompts I had used last time:

  • A high-magnification fluorescent micrograph of neural tissues
  • A colored scanning electron micrograph of carcinoma cells
  • A ribbon diagram of a large protein showing quaternary structure
  • A 3D illustration of plasmacytes releasing antibodies

Here is the best it provided for each, in clockwise order from top left:

DALL-E 3’s neurons were actually worse than DALL-E 2’s, with nothing even remotely close. Its carcinomas were more in line with what Midjourney provided last time, but look slightly more cartoonish. The ribbon diagram is better than any from the last test, although the structure is blatantly unrealistic. Its plasmacytes could make for a passable graphic illustration, if only they contained anything that looks like antibodies.

Conclusion

DALL-E 3 is a clear improvement from DALL-E 2. While it may be two steps forward and one step back, overall it did provide outputs which were closer to being usable than in our last test. It still has a way to go, and I don’t think it will peel us away from defaulting to Midjourney, but if it continues to improve at this rate, DALL-E 4 could provide a breakthrough for the generation of life science stock images.

Want your brand to shine brighter than even DALL-E could imagine? Contact BioBM. We’ll win you the admiration and attention of your scientist customers.

Can AI Replace Life Science / Laboratory Stock Images?

We’re over half a year into the age of AI, and its abilities and limitations for both text and image generation are fairly well-known. However, the available AI platforms have had a number of improvements over the past months, and have become markedly better. We are slowly but surely getting to the point where generative image AIs know what hands should look like.

But do they know what science looks like? Are they a reasonable replacement for stock images? Those are the meaningful questions if they are going to be useful for the purposes of life science marketing. We set out to answer them.

A Few Notes Before I Start Comparing Things

Being able to create images which are reasonably accurate representations is the bare minimum for the utility of AI in replacing stock imagery. Once we move past that, the main questions are those of price, time, and uniqueness.

AI tools are inexpensive compared with stock imagery. A mid-tier stock imagery site such as iStock or ShutterStock will charge roughly $10 per image if paid with credits, or anywhere from $7 down to roughly a quarter per image if you purchase a monthly subscription. Of course, if you want something extremely high-quality, images from Getty Images or a specialized science stock photo provider like Science Photo Library or ScienceSource can easily cost many hundreds of dollars per image. In comparison, Midjourney’s pro plan, which is $60 / month, gives you 30 hours of compute time. Each prompt provides 4 images and generally takes around 30 seconds, so you could, in theory, acquire 8 images per minute, meaning each costs 0.4 cents. (In practice, with the current generation of AI image generation tools, you are unlikely to get images which match your vision on the first try.) DALL-E’s pricing is even simpler: each prompt is one credit, also provides 4 images, and credits cost $0.13 each. Stable Diffusion is still free.
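For anyone who wants to sanity-check the arithmetic above, here is a quick back-of-the-envelope sketch. It assumes (unrealistically) that you could run Midjourney generations continuously for the full 30 hours of compute time; real-world cost per usable image will be higher for both platforms.

```python
# Rough per-image cost, using the figures quoted above.
MIDJOURNEY_MONTHLY = 60.00   # USD, pro plan
SECONDS_PER_PROMPT = 30      # typical generation time
IMAGES_PER_PROMPT = 4        # both platforms return 4 images per prompt
COMPUTE_HOURS = 30           # included compute time on the pro plan
DALLE_CREDIT = 0.13          # USD per credit; one credit = one prompt

# Midjourney: prompts that fit into the compute budget, times 4 images each.
prompts = COMPUTE_HOURS * 3600 / SECONDS_PER_PROMPT        # 3,600 prompts
mj_cost_per_image = MIDJOURNEY_MONTHLY / (prompts * IMAGES_PER_PROMPT)

# DALL-E: flat credit price divided across the 4 images per prompt.
dalle_cost_per_image = DALLE_CREDIT / IMAGES_PER_PROMPT

print(f"Midjourney: ${mj_cost_per_image:.4f}/image")   # ~0.4 cents
print(f"DALL-E:     ${dalle_cost_per_image:.4f}/image")
```

Even at DALL-E's higher per-image price, both are one to four orders of magnitude cheaper than the stock options discussed above.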

Having used stock image sites extensively, and having spent some time playing around with the current AI offerings for purposes other than business, it’s not clear to me which is more convenient. Sometimes you’ll get lucky and find a good AI image on the first try, but you could say the same about stock image sites. Where AI eliminates the need to page through endless stock images to find the right one, it replaces that with tweaking prompts and waiting for images to generate. There is some learning curve to AI as well: for instance, you learn to ask for a “film still” or “photograph” when you want a representation of real life which isn’t meant to look illustrated and cartoonish. There are a million of these tricks, and each system has its own small library of commands which it helps to be familiar with to get an optimal output. Ultimately, AI probably takes a little more time, and it requires more skill. Mindlessly browsing for stock images is still much easier than trying to get a good output from a generative AI (although playing with AI is usually more fun).

Where stock images simply can’t compete at all is uniqueness. When you generate an image with an AI, it is a unique image. Every image generated is one of one. You don’t get the “oh, I’ve seen this before” feeling that you get with stock images, which is especially prevalent for life science / laboratory topics given the relatively limited supply of scientific stock images. We will probably, at some point in the not too distant future, get past the point of being able to identify an AI image meant to look real by the naked eye. Stock images have been around for over a century and the uniqueness problem has only become worse. It is inherent to the medium. The ability to solve that problem is what excites me most about using generative AI imagery for life science marketing.

The Experiment! Ground Rules

If this is going to be an experiment, it needs structure. Here is how it is going to work.

The image generators & stock photo sites used will be:

  • Midjourney 5
  • DALL-E 2
  • Stable Diffusion 2
  • iStock
  • Getty Images
  • Science Photo Library

I was going to include ShutterStock but there’s a huge amount of overlap with iStock, I often find iStock to have slightly higher-quality images, and I don’t want to make more of a project out of this than it is already going to be.

I will be performing 10 searches / generations. To allow for a mix of ideas and concepts, some will be of people, some will be of things, I’ll toss in some microscopy-like images, and some will be of concepts which would normally be presented in an illustrated rather than photographed format. With the disclaimer that these concepts are taken solely from my own thoughts in hope of trying to achieve a good diversity of concepts, I will be looking for the following items:

  1. A female scientist performing cell culture at a biosafety cabinet.
  2. An Indian male scientist working with an LC-MS instrument.
  3. An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.
  4. A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.
  5. An NGS instrument on a bench in a genomics lab.
  6. A high-magnification fluorescent micrograph of neural tissues.
  7. A colored scanning electron micrograph of carcinoma cells.
  8. A ribbon diagram of a large protein showing quaternary structure.
  9. A 3D illustration of plasmacytes releasing antibodies.
  10. An illustration of DNA methylation.

So that nothing has an edge, none of these are things which I have recently searched for on stock image sites or previously attempted to generate using AI tools. Note that these are solely the ideas I am looking for; they are not necessarily the exact queries used when generating AI images or searching the stock photo sites.

Looking for stock images and generating AI graphics are very different processes but they both share one critical dimension: time. I will therefore be limiting myself to 5 minutes on each platform for each image. That’s a reasonable amount of time to try to either find a stock image or get a decent output from an AI. It will also ensure this experiment doesn’t take me two days. Here we go…

Round 1: A female scientist performing cell culture at a biosafety cabinet.

One thing that AI image generators are really bad at in the context of the life sciences is being able to identify and reproduce specific things. I thought that this one wouldn’t be too hard because these models are in large part trained on stock images and there’s a ton of stock images of cell culture, many of which look fairly similar. I quickly realized that this was going to be an exercise in absurdity and hilarity when DALL-E gave me a rack of 50 ml Corning tubes made of Play-Doh. I would be doing you a grave disservice if I did not share this hilarity with you, so I’ll present not only the best images which I get from each round, but also the worst. And oh, there are so many.

I can’t withhold the claymation 50 ml Corning tubes from you. It would just be wrong of me.

I also realized that the only real way to compensate for this within the constraints of a 5-minute time limit is to mash the generate button as fast as I can. When your AI only has a vague idea of what a biosafety cabinet might look like and it’s trying to faithfully reproduce them graphically, you want it to be able to grasp at as many straws as possible. Midjourney gets an edge here because I can run a bunch of generations in parallel.

Now, without further ado, the ridiculous ones…

Round 1 AI Fails

Dall-E produced a large string of images which looked less like cell culture and more like women baking lemon bars.

Midjourney had some very interesting takes on what cell culture should look like. My favorite is the one that looks like something in a spaceship and involves only machines. The woman staring at her “pipette” in the exact same manner I am staring at this half-pipette half-lightsaber over her neatly arranged, unracked tubes is pretty good as well. Side note: in that one I specifically asked for her to be pipetting a red liquid in a biosafety cabinet. It made the gloves and tube caps red. There is no liquid. There is no biosafety cabinet.

For those who have never used it, Stable Diffusion is hilariously awful at anything meant to look realistic. If you’ve ever seen AI images of melted-looking people with 3 arms and 14 fingers, it was probably Stable Diffusion. The “best” it gave me were things that could potentially be biosafety cabinets, but when it was off, boy was it off…

Rule number one of laboratories: hold things with your mouth. (Yes we are obviously kidding, do not do that.)

That was fun! Onto the “successes.”

Round 1 AI vs. Stock

Midjourney did a wonderful job of creating realistic-looking scientists in labs that you would only see in a movie. Also keeping with the movie theme, Midjourney thinks that everyone looks like a model; no body positivity required. It really doesn’t want people to turn the lights on, either. Still, the best AI results, by a country mile, were from Midjourney.

The best Dall-E could do was give me something that you might mistake for cell culture at a biosafety cabinet if you didn’t look at it directly and were just glancing past it as you turned your head.

Stable Diffusion’s best attempts are two things which could absolutely be biosafety cabinets in Salvador Dali world. Also, that scientist on the right may require medical attention.

Stock image sites, on the other hand, produce some images of cell culture in reasonably realistic looking settings, and it took me way less than 5 minutes to find each. Here are images from iStock, Getty Images, and Science Photo Library, in that order:

First round goes to the stock image sites, all of which produced a better result than anything I could coax from AI. AI 0 – 1 Stock.

Round 2: An Indian male scientist working with an LC-MS instrument.

I am not confident that AI is going to know what an LC-MS looks like. But let’s find out!

One notable thing that I found is that the less specific you become, the easier it gets for the AI. The below image was a response to me prompting Dall-E for a scientist working with an LC-MS, but it did manage to output a realistic looking person in an environment that could be a laboratory. It’s not perfect and you could pick it apart if you look closely, but it’s pretty close.

A generic prompt like “photograph of a scientist in a laboratory” might work great in Midjourney, or even Dall-E, but the point of this experiment would be tossed out the window if I set that low of a bar.

Round 2 AI Fails

Midjourney:

Dall-E:

Stable Diffusion is terrible. It’s difficult to tell the worst ones from the best ones. I was going to call one of these the “best” but I’m just going to put them all here because they’re all ridiculous.

Round 2 AI vs. Stock

Midjourney once again output the best results by far, and had some valiant efforts…

… but couldn’t match the real thing. Images below are from iStock, Getty Images, and Science Photo Library, respectively.

One thing you’ve likely noticed is that none of these are Indian men! While we found good images of scientists performing LC-MS, we couldn’t narrow it down to both race and gender. Sometimes you have to take what you can get! We were generally able to find images which show more diversity, however, and it’s worth noting that Science Photo Library had the most diverse selection (although many of the images I found there are editorial use only, which is very limiting from a marketing perspective).

Round 2 goes to the stock sites. AI 0 – 2 Stock.

Round 3: An ethnically diverse group of scientists in a conference room holding a lab meeting. One scientist presents their work.

This should be easier all around.

Side note: I should’ve predicted this, but as the original query merely asked for science, my initial Midjourney query made it look like the lab was presenting something out of a sci-fi game. Looked cool, but not what we’re aiming for.

Round 3 AI Fails

Dall-E presented some interesting science on the genetic structure of dog kibble.

Dall-E seemed to regress with these queries, as if drawing more than one person correctly was just too much to ask. It produced a huge stream of almost Picasso-esque people presenting something that vaguely resembled things which could, if sufficiently de-abstracted, be scientific figures. It’s as if it knows what it wants to show you but is drawing it with the hands of a 2-year-old.

Stable Diffusion is just bad at this. This was the best it could do.

Round 3 AI vs. Stock

Take the gloves off, this is going to be a battle! While Midjourney continued its penchant for lighting which is more dramatic than realistic, it produced a number of beautiful images with “data” that, while they are extravagant for a lab meeting, could possibly be illustrations of some kind of life science. A few had some noticeable flaws – even Midjourney does some weird stuff with hands sometimes – but they largely seem usable. After all, the intent here is as a replacement for stock images. Such images generally wouldn’t be used in a way which would draw an inordinate amount of attention to them. And if someone does notice a small flaw that gives it away as an AI image, is that somehow worse than it clearly being stock? I’m not certain.

Stock images really fell short here. The problem is that people taking stock photos don’t have data to show, so they either don’t show anyone presenting anything, or they show them presenting something which betrays the image as generic stock. Therefore, to make them look like scientists, they put them in lab coats. Scientists, however, generally don’t wear lab coats outside the lab. It’s poor lab hygiene. Put a group of scientists in a conference room and it’s unusual that they’ll all be wearing lab coats.

That’s exactly what iStock had. Getty Images had an image of a single scientist presenting, but you didn’t see the people he was presenting to. Science Photo Library, which has far less to choose from, also didn’t have people presenting visible data. The three comps are below:

Side Note / ProTip: You can find that image from Getty Images, as well as many images that Getty Images labels as “royalty free” on iStock (or other stock image sites) for way less money. Getty will absolutely fleece you if you let them. Do a reverse image search to find the cheapest option.

Considering the initial idea we wanted to convey, I have to give this round to the AI. The images are unique, and while they lack some realism, so do the stock images.

Round 3 goes to AI. AI 1 – 2 Stock.

Let’s see if Dall-E or Stable Diffusion can do better in the other categories.

Round 4: A close up of liquid dripping from pipette tips on a high-throughput automated liquid handling system.

I’ve seen nice stock imagery of this before. Let’s see if AI can match it, and if I can readily find it again on the stock sites.

Round 4 AI Fails

Dall-E had a long string of images which looked like everything shown was made entirely of polystyrene and put in the autoclave at too high a temperature. You might have to click to expand to see the detail. It looks like everything partially melted, but then resolidified.

Stable Diffusion is more diffuse than stable. Three of these are the best that it did while the fourth is when it gave up and just started barfing visual static.

This is the first round where Midjourney, in my opinion, didn’t do the best job. Liquid handling systems have a fair amount of variability in how they can be presented, but pipette tips do not, and it didn’t seem to know what pipette tips should look like, nor how they would be arranged in a liquid handling system. These are the closest it got:

Very pretty! Not very accurate.

Round 4 AI vs. Stock

We have a new contestant for the AI team! Dall-E produced the most realistic looking image. Here you have it:

Not bad! Could it be an automated pipetting system? We can’t see it, but it’s possible. The spacing between the tips isn’t quite even and it looks like PCR strips rather than a plate, but hey, a microplate wasn’t part of the requirements here.

Let’s see what I can dig up for stock… Here’s iStock, Getty, and SPL, respectively:

I didn’t get the drips I was looking for – probably needed to dig more for that – but we did get some images which are obviously liquid handling systems in the process of dispensing liquids.

As valiant of an effort as Dall-E had, the images just aren’t clean enough to have the photorealism of real stock images. Round goes to the stock sites. AI 1 – 3 Stock.

Round 5: An NGS instrument on a bench in a genomics lab.

I have a feeling the higher-end stock sites are going to take this, as there aren’t a ton of NGS instruments so it might be overly specific for AI.

Round 5 AI Fails

Both Midjourney and Dall-E needed guidance that a next-generation sequencer wasn’t some modular device used for producing techno music.

Dall-E, however, proved not to be particularly trainable. I imagine its AI mind thinking: “Oh, you want a genome sequencer? How about if I write it for you in gibberish?” That was followed by it throwing its imaginary hands in the air and generating random imaginary objects for me.

Midjourney also had some pretty but far-out takes, such as this thing which looks much more like an alien version of a pre-industrial loom.

Round 5 AI vs. Stock

This gets a little tricky, because AI is never going to show you a specific genome sequencer, not to mention that if it did you could theoretically run into trademark issues. With that in mind, you have to give them a little bit of latitude. Genome sequencers come in enough shapes and sizes that there is no one-size-fits-all description of what one looks like. Similarly, there are few enough popular ones that unless you see a specific one, or its tell-tale branding, you might not know what it is. Can you really tell the function of one big gray plastic box from another just by looking at it? Given those constraints, I think Midjourney did a heck of a job:

There is no reason that a theoretical NGS instrument couldn’t look like any of these (although some are arguably a bit small). Not half bad! Let’s see what I can get from stock sites, which also will likely not want to show me logos.

iStock had a closeup photo of a MinION which, while it technically fits the description of what we were looking for, doesn’t fit the intent. Aside from that, it had a mediocre rendering of something supposed to be a sequencer and a partial picture of something rather old which might be an old Sanger sequencer.

After not finding anything at all on Getty Images, down to the wire right at the 5:00 mark I found a picture of a NovaSeq 6000. Science Photo Library had an image of an ABI SOLiD 4 on a bench in a lab with the lights off.

Unfortunately, Getty has identified the person in the image, meaning that even though you couldn’t ID the individual just by looking at the image, it isn’t suitable for commercial use. I’m therefore disqualifying that one. Is the oddly lit (and extremely expensive) picture of the SOLiD 4 or the conceptually off-target picture of the MinION better than what the AI came up with? I don’t think I can conclusively say either way, and one thing that I dislike doing as a marketer is injecting my own opinion where it shouldn’t be. The scientists should decide! For now, this will be a tie.

AI 1, Stock 3, Tie 1

Round 6: A high-magnification fluorescent micrograph of neural tissues.

My PhD is in neuroscience so I love this round. If Science Photo Library doesn’t win this round they should pack up and go home. Let’s see what we get!

Round 6 AI Fails

Dall-E got a rough, if not slightly cartoony, shape of neurons but never really coalesced into anything that looked like a genuine fluorescent micrograph (top left and top center in the image below). Stable Diffusion, on the other hand, was either completely off the deep end or just hoping that if it overexposed out-of-focus images enough that it could slide by (top right and bottom row).

Round 6 AI vs. Stock

Midjourney produced a plethora of stunning images. They are objectively beautiful and could absolutely be used in a situation where one only needed the concept of neurons rather than an actual, realistic-looking fluorescent micrograph.

They’re gorgeous, but they’re very obviously not faithful reproductions of what a fluorescent micrograph should look like.

iStock didn’t produce anything within the time limit. I found high-magnification images of neurons which were not fluorescent (probably colored TEM), fluorescent images of neuroblastomas (not quite right), and illustrations of neurons which were not as interesting as those above.

Getty Images did have some, but Science Photo Library had pages and pages of on-target results. SPL employees, you still have jobs.

A small selection from page 1 of 5.

AI 1, Stock 4, Tie 1

Round 7: A colored scanning electron micrograph of carcinoma cells.

This is another one where Science Photo Library should win handily, but there’s only one way to find out!

Round 7 AI Fails

None of the AI tools failed in such a spectacular way that it was funny. Dall-E produced results which suggested it almost understood the concept, although it could never quite put it together. Here’s a representative selection from Dall-E:

… and from Stable Diffusion, which as expected was further off:

Round 7 AI vs. Stock

Midjourney actually got it, and if these aren’t usable, they’re awfully close. As with the last round, these would certainly be usable if you needed to communicate the concept of a colored SEM image of carcinoma cells more than you needed accurate imagery of them.

iStock didn’t have any actual SEM images of carcinomas which I could find within the time limit, and Midjourney seems to do just as good of a job as the best illustrations I found there:

Getty Images did have some real SEM images, but the ones I found were credited to Science Photo Library, and their selection was absolutely dwarfed by SPL’s collection, which again had pages and pages of images of many different cancer cell types:

It just keeps going. There were 269 results.

Here’s where this gets difficult. On one hand, we have images from Midjourney which would take the place of an illustration and which cost me less than ten cents to create. On the other hand, we have actual SEM images from Science Photo Library that are absolutely incredible, not to mention real, but depending on how you want to use them, would cost somewhere in the $200 – $2000 range per photo.

To figure out who wins this round, I need to get back to the original premise: Can AI replace stock in life science marketing? These images are every bit as usable as the items from iStock. Are they as good as the images from SPL? No, absolutely not. But are marketers always going to want to spend hundreds of dollars for a single stock photo? No, absolutely not. There are times when it will be worth it, but many times it won’t be. That said, I think I have to call this round a tie.

AI 1, Stock 4, Tie 2

Round 8: A ribbon diagram of a large protein showing quaternary structure.

This is something that stock photo sites should have in droves, but we’ll find out. To be honest, for things like this I personally search for images with friendly licensing requirements on Wikimedia Commons, which in this case gives ample options. But that’s outside the scope of the experiment so on to round 8!

Round 8 AI Fails

I honestly don’t know why I’m still bothering with Stable Diffusion. The closest it got was something which might look like a ribbon diagram if you took a massive dose of hallucinogens, but it mostly output farts.

Dall-E was entirely convinced that all protein structures should have words on them (a universally disastrous yet hilarious decision from any AI image generator) and I could not convince it otherwise:

This has always baffled me, especially as it pertains to DALL-E, since it’s made by OpenAI, the creators of ChatGPT. You would think it would at least be able to output actual words, even if used nonsensically, but apparently we aren’t that far into the future yet.

Round 8 AI vs. Stock

While Midjourney did listen when I told it not to use words and provided predictably beautiful output, the results are obviously not genuine protein ribbon diagrams. Ribbon diagrams are a thing with a very specific look, and this is not it.

I’m not going to bother digging through all the various stock sites because there isn’t a competitive entry from team AI. So here’s a RAF-1 dimer from iStock, and that’s enough for the win.

AI 1, Stock 5, Tie 2. At this point AI can no longer catch up to stock images, but we’re not just interested in what “team” is going to “win” so I’ll keep going.

Round 9: A 3D illustration of plasmacytes releasing antibodies.

I have high hopes for Midjourney on this. But first, another episode of “Stable Diffusion Showing Us Things”!

Round 9 AI Fails

Stable Diffusion is somehow getting worse…

DALL-E was closer, but also took some adventures into randomness.

Midjourney wasn’t initially giving me the results that I hoped for, so to test if it understood the concept of plasmacytes I provided it with only “plasmacytes” as a query. No, it doesn’t know what plasmacytes are.

Round 9 AI vs. Stock

I should just call this Midjourney vs. Stock. Regardless, Midjourney didn’t quite hit the mark. There are an inordinate number of ways to refer to plasmacytes (plasma cells, B lymphocytes, B cells, etc.), and it did eventually get the idea, but the cells never looked quite right and it never got the antibodies right, either. It grasped the concept of a cell releasing something, but those things look nothing like antibodies.

I found some options on iStock and Science Photo Library (shown below, respectively) almost immediately, and the SPL option is reasonably priced if you don’t need it in extremely high resolution, so my call for Midjourney has not panned out.

Stock sites get this round. AI 1, Stock 6, Tie 2.

Round 10: An illustration of DNA methylation.

This is fairly specific, so I don’t have high hopes for AI here. The main question in my mind is whether stock sites will have illustrations of methylation specifically. Let’s find out!

Round 10 AI Fails

I occasionally feel like I have to fight with Midjourney to not be so artistic all the time, but adding things like “realistic looking” or “scientific illustration of” didn’t exactly help.

Midjourney also really wanted DNA to be a triple helix. Or maybe a 2.5-helix?

I set the bar extremely low for Stable Diffusion and just tried to get it to draw me DNA. Doesn’t matter what style, doesn’t need anything fancy, just plain old DNA. It almost did! Once. (Top left below.) But in the process it also created a bunch of abstract mayhem (bottom row below).

With anything involving “methylation” in the query, DALL-E did that thing where it tries to replace accurate representation with what it thinks are words. I therefore tried to just give it visual instructions, but that proved far too complex.

Round 10 AI vs. Stock

I have to admit, I did not think it was going to be this hard to get reasonably accurate representations of regular DNA out of Midjourney. It did produce some, but not many, and the best looked like it was made by Jacob the Jeweler. If methyl groups look like rhinestones, 10/10. DALL-E did produce some things that look like DNA stock images circa 2010. All of these have the correct helix orientation as well: right-handed. That was a must.

iStock, Getty Images, and Science Photo Library all had multiple options for images to represent methylation. Here is one from each, shown in the aforementioned order:

The point again goes to stock sites.

Final Score: AI 1, Stock 7, Tie 2.

Conclusion / Closing Thoughts

Much like generative text AI, generative image AI shows a lot of promise, but doesn’t yet have the specificity and accuracy needed to be broadly useful. It has a way to go before it can reliably replace stock photos and illustrations of laboratory and life science concepts for marketing purposes. However, for concepts which are fairly broad, or in cases where simply getting the idea across is sufficient, AI can sometimes act as a replacement for basic stock imagery. As for me, if I get a good feeling that AI could do the job and I’m not enthusiastic about the images I’m finding on lower-cost stock sites, I’ll most likely give Midjourney a go. Sixty dollars a month gets us functionally infinite attempts, so the value is pretty good. If we get a handful of stock images out of it each month, that’s fine – and there are some from this experiment we’ll certainly be keeping on hand!

I would not be particularly comfortable about the future if I were a stock image site, but especially for higher-quality or more specialized / specific images, AI has a long way to go before it can replace them.

"Want your products or brand to shine even more than it does in the AI mind of Midjourney? Contact BioBM and let’s have a chat!"

Google Ads Auto-Applied Recommendations Are Terrible

Unfortunately, Google has attempted to make them ubiquitous.

Google Ads has been rapidly expanding their use of auto-applied recommendations recently, to the point where it briefly became my least favorite thing until I turned almost all auto-apply recommendations off for all the Google Ads accounts which we manage.

Google Ads has a long history of thinking it’s smarter than you and failing. Left unchecked, its “optimization” strategies have the potential to drain your advertising budgets and destroy your advertising ROI. Many users of Google Ads’ product ads should be familiar with this. Product ads don’t allow you to set targeting, and instead Google chooses the targeting based on the content on the product page. That, by itself, is fine. The problem is when Google tries to maximize its ROI and looks to expand the targeting contextually. To give a practical example of this, we were managing an account advertising rotary evaporators. Rotary evaporators are very commonly used in the cannabis industry, so sometimes people would search for rotary evaporator related terms along with cannabis terms. Google “learned” that cannabis-related terms were relevant to rotary evaporators: a downward spiral which eventually led to Google showing this account’s product ads for searches such as “expensive bongs.” Most people looking for expensive bongs probably saw a rotary evaporator, didn’t know what it was but did see it was expensive, and clicked on it out of curiosity. Google took that cue as rotary evaporators being relevant for searches for “expensive bongs” and then continued to expand outwards from there. The end result was us having to continuously play negative keyword whack-a-mole to try to exclude all the increasingly irrelevant terms that Google thought were relevant to rotary evaporators because the ads were still getting clicks. Over time, this devolved into Google expanding the rotary evaporator product ads to searches for – and this is not a joke – “crack pipes”.

The moral of that story, which is not about auto-applied recommendations, is that Google does not understand complex products and services such as those in the life sciences. It likewise does not understand the complexities and nuances of individual life science businesses. It paints in broad strokes, because broad strokes are easier to code, the managers don’t care because their changes make Google money, and considering Google has something of a monopoly it has very little incentive to improve its services – almost no one is going to pull their advertising dollars from the company which has about 90% of search volume outside China. Having had some time to observe the changes which Google’s auto-applied recommendations make, I can see the implicit assumptions built into them. Google either thinks you are selling something like pizza or legal services and largely have no clue what you’re doing, or that you have a highly developed marketing program with holistic, integrated analytics.

As an example of the damage that Google’s auto-applied recommendations can do, take a CRO we are working with. Like many CROs, they offer services across a number of different indications, with different ad groups for different indications. After Google had auto-applied some recommendations, some of which were bidding-related, we ended up with ad groups that differed in cost per click by more than 100x. With highly specific and targeted keywords in each ad group, there is no reasonable argument for how, in the process of optimizing for conversions, Google could decide that one ad group should have a CPC more than 100x that of another. The optimizations did not lead to more conversions, either.

Google’s “AI” ad account optimizer further decided to optimize a display ad campaign for the same client by changing bidding from manual CPC to optimizing for conversions. The campaign went from getting about 1800 clicks per week at a cost of about $30 to getting 96 clicks per week at a cost of $46. CPC went from $0.02 to $0.48! No wonder they wanted to change the bidding; they showed the ads roughly 19x less (CTR was not materially different before and after Google’s auto-applied recommendations, so impressions fell roughly in line with clicks) and charged 24x more per click. Note that the targeting did not change. What Google was optimizing for was its own revenue per impression! It’s the same thing it’s doing when it decides to show rotary evaporator product ads on searches for crack pipes.
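The arithmetic behind those CPC figures is worth a quick sanity check. A minimal sketch using only the weekly numbers stated above:

```python
# Weekly figures quoted above, before and after Google's bidding change.
before_clicks, before_cost = 1800, 30.0
after_clicks, after_cost = 96, 46.0

cpc_before = before_cost / before_clicks
cpc_after = after_cost / after_clicks

print(f"CPC before: ${cpc_before:.2f}")  # prints $0.02
print(f"CPC after:  ${cpc_after:.2f}")   # prints $0.48

# The ~24x figure comes from the rounded CPCs (0.48 / 0.02); the unrounded
# ratio is closer to 29x.
print(f"CPC ratio: {cpc_after / cpc_before:.1f}x")
```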

“Save time.” Is that what we’re doing?

Furthermore, Google’s optimizations to the ads themselves amount to horribly generic guesswork. A common optimization is to simply include the name of the ad group or terms from pieces of the destination URL in ad copy. GPT-3 would be horrified at the illiteracy of Google Ads’ optimization “AI”.

A Select Few Auto-Apply Recommendations Are Worth Leaving On

Google has a total of 23 recommendation types. Of those, I always leave on:

  • Use optimized ad rotation. There is very little opportunity for this to cause harm, and it addresses a question that is difficult to answer on your own: which ads will work best at which times. Just let Google figure this out. There isn’t any potential for misaligned incentives here.
  • Expand your reach with Google search partners. I always have this on anyway. It’s just more traffic. Unless you’re particularly concerned about the quality of traffic from sites which aren’t google.com, there’s no reason to turn this off.
  • Upgrade your conversion tracking. This allows for more nuanced conversion attribution, and is generally a good idea.

A whole 3/23. Some others are situationally useful, however:

  • Add responsive search ads can be useful if you’re having problems with quality score and your ad relevance is stated as being “below average”. This will, generally, allow Google to generate new ad copy that it thinks is relevant. Be warned: Google is very bad at generating ad copy. It will frequently keyword-spam without regard to context, but at least you’ll see what it wants you to do to generate more “relevant” ads. Note that I suggest this over “improve your responsive search ads” so that Google doesn’t destroy the existing ad copy which you may have spent time and effort creating.
  • Remove redundant keywords / remove non-serving keywords. Google says that these options will make your account easier to manage, and that is generally true. I usually have these off because if I have a redundant keyword it is usually for a good reason and non-serving keywords may become serving keywords occasionally if volume improves for a period of time, but if your goal is simplicity over deeper data and capturing every possible impression, then leave these on.

That’s all. I would recommend leaving the other 18 off at all times. Unless you are truly desperate and at a complete loss for ways to grow your traffic, you should never allow Google to expand your targeting. That lesson has been repeatedly learned with Product Ads over the past decade plus. Furthermore, do not let Google change your bidding. Your bidding methodology is likely a very intentional decision based on the nature of your sales cycle and your marketing and analytics infrastructure. This is not a situation where best practices are broadly applicable, but best practices are exactly what Google will try to enforce.

If you really don’t want to be bothered at all, just turn them all off. You won’t be missing much, and you’re probably saving yourself some headaches down the line. From our experience thus far, it seems that the ability of Google Ads’ optimization AI to help optimize Google Ads campaigns for life sciences companies is far lesser than its ability to create mayhem.

"Even GPT-4 still gets the facts wrong a lot. Some things simply merit human expertise, and Google Ads is one of them. When advertising to scientists, you need someone who understands scientists and speaks their language. BioBM’s PhD-studded staff and deep experience in life science marketing mean we understand your customers better than any other agency – and understanding is the key to great marketing.

Why not leverage our understanding to your benefit? Contact Us."

Stop Hosting Your Own Videos

I know this isn’t going to apply to 90% of you, and to anyone who is thinking “of course – why would anyone do that?” – I apologize for taking your time. Those people who see this as obvious can stop reading. What that 90% may not know, however, is that the other 10% still think, for some terrible reason, that hosting their own videos is a good idea. So, allow me to state conclusively:

Hosting your own videos is always a terrible decision. Let’s elaborate.

Reasons Why Hosting Your Own Videos Is A Terrible Decision:

  1. Your audience is not patient. If you think they’re going to wait through more than one or two (if you’re lucky) periods of buffering, you’re wrong. Videos are expensive to produce. If you’re putting in the resources to make a video, chances are you want as much of your audience as possible to see it. Buffering will ensure they don’t.
  2. Your servers are not built for this. Your website is most likely hosted on a server which is designed to serve up webpages. Streaming video content is probably not your host’s cup of tea. In fact, they’d probably rather you not do it (or tell you to buy a super-expensive hosting plan to accommodate the bandwidth requirements of streaming video).
  3. Your video compression is probably terrible. Your video editing software will certainly export your video into a compressed file. “Compressed,” in this sense, means it isn’t the giant, unwieldy raw data file you would otherwise have. It does not mean “small enough to stream effectively.” You know whose video compression is leagues beyond anything else you’re going to find? YouTube, Vimeo, and probably most other major services that stream video on the internet as a business.
  4. There are companies that do this professionally. When I was in undergrad and majoring in chemical engineering, the other majors jokingly referred to us as “glorified plumbers,” but I don’t touch pipes. I don’t know the first thing about plumbing. So what do I do when I get a leak? I call a plumber, because they’ll definitely solve the problem far better than I would. Likewise, if you want to host video, why not get a professional video hosting service? There’s plenty of them out there, including some that are both very reputable and inexpensive.

An Example

I’m at my office on a reasonably fast internet connection. It’s cable, not fiber optic, but it’s also 11:30 in the morning – not prime “Netflix and chill” time when the intertubes are clogged up with people binge watching a full season of House of Cards. Just to show you that any bandwidth problems aren’t on my end, I did an Ookla Speedtest:

The internet is fast.

239 Mbps. Not tech school campus internet kind of fast, but more than fast enough to stream multiple YouTube videos at 4k if I wanted to.

And now for the example… I’m not going to tell you whose video this is, but they have an ~1 minute long video to show how easy their product is to use. Luckily for me, they don’t have a lot of branding on it so I can use them as an example without shaming them. The below screenshots are where the video stopped to buffer. Note that the video was not fullscreened and was about 1068 x 600. You can click the images to see them full size and see the progress bar and time at the bottom.

Made it 18 seconds! Off to a slightly less than disastrous start…

28 seconds. Getting there…

Well that didn’t go far. 32 seconds.

37 seconds. There’s no way I’d still be watching this if I wasn’t doing this for the purposes of demonstration.

42 seconds…

51 seconds! Almost there!

“Done” … or not quite done. 56 seconds. I don’t even know why it stopped to buffer here as almost the entire rest of the video was already downloaded.

The video stopped playing 7 times in the span of 64 seconds.

What To Do Instead

Perhaps the most well-known paid video hosting service, Vimeo has a pro subscription that will allow you to embed ad-free videos without their branding on it for $20 / month. There’s a bunch of other, similar services out there as well. Or, if you don’t want to spend anything and don’t mind the possibility of an ad being shown prior to your video, you can just embed YouTube videos. The recommended videos which show after playback can be easily turned off in the embed options. You can even turn off the video title and player controls if you don’t want your audience to be able to click through to YouTube or see the bar at the bottom (although the latter also makes them unable to navigate through your video).
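If you go the YouTube route, the embed options mentioned above correspond to query parameters on the embed URL. Here's a minimal sketch that builds one; `rel` and `controls` are real YouTube player parameters, but their exact behavior has shifted over the years (YouTube later changed `rel=0` to merely restrict related videos to the same channel), so treat this as illustrative and check the current player documentation:

```python
from urllib.parse import urlencode

def youtube_embed_url(video_id: str, hide_related: bool = True,
                      hide_controls: bool = False) -> str:
    """Build a YouTube embed URL with the player options discussed above."""
    params = {}
    if hide_related:
        params["rel"] = 0       # suppress (or limit) recommended videos after playback
    if hide_controls:
        params["controls"] = 0  # hide the player control bar
    query = f"?{urlencode(params)}" if params else ""
    return f"https://www.youtube.com/embed/{video_id}{query}"

# Drop the resulting URL into an iframe's src attribute to embed the video:
print(youtube_embed_url("dQw4w9WgXcQ", hide_controls=True))
# https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0&controls=0
```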

Basically, if you want your videos to actually get watched, do anything other than hosting them yourself.

P.S. – If you’ve read all this and still think hosting your own videos is the correct solution, which it’s not, here’s a tip: upload them to YouTube, then download them using a tool like ClipConverter. This way you’ll at least get the benefit of YouTube’s video compression, which is probably the best in the world.

"Want marketing communications that truly captivate and engage your customers? It’s time to contact BioBM. Our life science marketing experts are here to help innovative companies better reach, influence, and convert scientists."

Are You Providing Self-Service Journeys?

Customers are owning more of their own decisions.

We’ve all heard the data on how customers are delaying contact with salespeople and owning more of their own decision journeys. Recent research from Forrester predicts that the share of B2B sales, by dollar value, conducted via e-commerce will increase by about a third from 2015 to 2020: from 9.3% to 12.1%. Why does Forrester see this number growing at such a rate? Primarily due to “channel-shifting B2B buyers” – people that are willfully conducting purchases entirely online rather than going through a manned sales channel.

All this adds up to more control of the journey residing with the customers themselves and fewer opportunities for salespeople to influence them. Your marketing needs to accommodate these control-desiring customers. It needs to accommodate as much of the buying journey as it can, and in many instances it can and should accommodate the entire buying journey – digitally.

Scientist considering an online purchase

Accommodating Digital Buying Journeys

Planning for the enablement of self-service journeys is a complex, multi-step process. In brief, it consists of:

  1. Understanding the relevant customer personas. Defining customer personas is always a somewhat ambiguous task, but my advice to those doing it is always not to over-define them. It’s easy to achieve so much granularity that the process of defining a customer persona becomes meaningless, with far too many personas and far too little to distinguish their journeys in a practical sense. It’s okay to paint with a broad brush. For a relatively small industry such as ours, factors such as “level of influence on the purchasing decision” and “familiarity with the technology” are far more useful than the B2C demographic categories you’ll likely see if you look up examples of creating customer personas. It probably doesn’t much matter whether the scientist you’re defining is a millennial or a Gen X-er, nor do you likely need to account for the difference between scientists and senior scientists. That’s not what’s important. Focus on the critical factors, and clear your mind of everything else.
  2. Mapping the journey for each persona. This can be done with data analytics, market research, and / or simply as a good old-fashioned thought experiment, depending on your resources and capabilities as well as how accurate you need to be. If you’re using data, use the customers who converted as examples and trace their buying journeys from the beginning (which will probably have online and offline components). Bin each into the appropriate persona, then use them to inform what the journey requires for that persona. The market research approach is fairly straightforward and can be done with any combination of interviews, focus groups, and user testing approaches. If you’re on a budget and just want to sit down and brainstorm the decision journey, start with each “raw” customer persona, then ask: “Where does this person want to go next in their decision journey?” A scientist may want more information, they may desire a certain experience, etc. Continue asking that question until you get to the point of purchase.
  3. Mapping information or experiences to each step of the journey. Once you know the layout of the journeys and the goals at each step, it should be relatively clear what you need to provide the customer at each step to get them to move forward in their journey. This step is really just asking: “How will we address their needs at each discrete step of their journey?”
  4. Determining the most appropriate channel for the delivery of each experience. You now know what you’re going to deliver to each customer at each point in the decision journey to keep them moving forward, but how you deliver it is important as well. On paper, it might seem as though you can simply provide all the information and experiences the customer needs in one sitting, and that’s all they will need to complete their decision journey. In practice, it often doesn’t work that way. Decisions often involve multiple stakeholders and take place over the course of days, weeks, or months. Few B2B life science purchasing decisions are made on impulse. For young or less familiar brands, you may also need time for the scientist to develop sufficient familiarity with the brand to be comfortable purchasing from you. This is where you must consider not only the structure of the buying journey, but the somewhat less tangible elements of its progression. Structured correctly, your roadmap should essentially remove steps from the buying journey for the customer.
  5. Implement it! You now know what the scientists’ decision journeys look like and exactly how you’ll address them. Bring that knowledge into the real world and create a holistic digital experience that enables completion of the self-serve buying journey!
  6. That’s it! Your marketing is now ready for today’s (and tomorrow’s) digitally-inclined buyers.
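The process above can be sketched as a simple data structure: each persona (step 1) gets an ordered journey (step 2), and each step of that journey pairs a customer need with the content that addresses it (step 3) and the channel that delivers it (step 4). All persona and content names below are hypothetical, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    need: str      # what the customer wants at this point in the journey
    content: str   # the information or experience that addresses that need
    channel: str   # where that content gets delivered

# One hypothetical persona with its journey mapped end to end:
journeys = {
    "budget-holding lab manager": [
        JourneyStep("understand the technology", "explainer article", "organic search"),
        JourneyStep("compare options", "comparison guide", "email nurture"),
        JourneyStep("justify the spend", "ROI calculator", "website"),
        JourneyStep("purchase", "e-commerce checkout", "website"),
    ],
}

# Step 5 ("implement it") amounts to walking each persona's journey and making
# sure every need has content and a channel assigned - no gaps, no dead ends.
for persona, steps in journeys.items():
    for step in steps:
        assert step.content and step.channel, f"gap in {persona}'s journey"
```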

Owning the Journey

What we’ve outlined above will create a digital experience that allows customers to complete a purchasing decision on their own terms, which is something they increasingly want to do. Building such an experience will give you a definite advantage, but your customers will still shop around. It’s not enough to get them to home in solely on your brand (which, if we’re being honest, is an incredibly difficult task).

However, digital marketing is capable of more than enabling your scientist-customers to complete their decision journeys on their own. It is possible to create a digital experience that owns a hugely disproportionate share of the decision journey and thereby exerts outsized influence upon it. Such mechanisms are called decision engines, and when properly implemented they provide their creators with massive influence over their markets. If you would like to learn more about decision engines, check out this recent podcast we did on the topic with Life Science Marketing Radio or download our report on the topic.

"Is your life science brand adapting to the changing nature of scientists’ buying journeys? If you’re not well on your way to completing your marketing’s digital transformation, then it’s probably time to call BioBM. Not only do we have the digital skill set to develop transformational capabilities for our life science clients, but we stay one step ahead with our strategies. We live in an age of constant change, and we work to ensure that our clients aren’t simply following today’s best practices, but are positioned to be the leaders of tomorrow. We’ll provide you with the next generation of marketing strategies, which will not only elevate your products and services, but turn your marketing program into a strategic advantage. So what are you waiting for?"

Carlton Hoyt Discusses Decision Engines on Life Science Marketing Radio

Principal Consultant Carlton Hoyt recently sat down with Chris Conner for the Life Science Marketing Radio podcast to talk about decision engines, how they are transforming purchasing decisions, and what the implications are for life science marketers. The recording and transcript are below.

Transcript

CHRIS: Hello and welcome back. Thank you so much for joining us again today. Today we’re going to talk about decision engines. These are a way to help ease your customer’s buying process when there are multiple options to consider. So we’re going to talk about why that’s important and the considerations around deploying them. So if you offer lots and lots of products and customers have choices to make about the right ones, you don’t want to miss this episode.

Personalization Can Backfire

Marketers are used to seeing a lot of data showing that improving personalization leads to improved demand generation. The more you tailor your message to the customer, the more relevant that message will be and the more likely the customer will choose your solution. Sounds reasonable, right?

In most cases personalization is great, but what those aforementioned studies and all the “10,000-foot view” data miss is that there is a subset of customers for whom personalization doesn’t help. There are times when personalization can actually hurt you.

When Personalization Backfires

Stressing the points which are most important to an individual works great … when that individual has sole responsibility for the purchasing decision. For large or complex purchases, however, that is often not the case. When different individuals involved in a purchasing decision have different priorities and are receiving different messages tailored to their individual needs, personalization can act as a catalyst for divergence within the group, leading different members to reinforce their own needs and prevent consensus-building.

Marketers are poor at addressing the problems in group purchasing. A CEB study of 5000 B2B purchasers found that the likelihood of any purchase being made decreases dramatically as the size of the group making the decision increases; from an 81% likelihood of purchase for an individual, to just 31% for a group of six.

For group purchases, marketers need to focus less on personalization and more on creating consensus.

Building Consensus for Group Purchases

Personalization reinforces each individual’s perspective. In order to more effectively sell to groups, marketers need to reinforce shared perspectives of the problem and the solution. Highlight areas of common agreement. Use common language. Develop learning experiences which are relevant to the entire group and can be shared among them.

Personalization focuses on convincing individuals that your solution is the best. In order to better build consensus, equip individuals with the tools and information they need to provide perspective about the problem to their group. While most marketers spend their time pushing their solution, the CEB found that the sticking point for most groups is agreeing on the nature of the solution that should be sought. By giving the individuals within a group who may favor your solution the ability to frame the nature of the problem for the others, you’ll help those with a nascent desire to advocate for you get past this sticking point and guide the group to be receptive to your type of solution. Having helped them clear that critical barrier, you’ll be left fighting only your direct competitors.

Winning a sale requires more than just understanding the individual. We’ve been trained to believe that personalization is universally good, but that doesn’t align with reality. For group decisions, ensure your marketing isn’t reinforcing the individual, but rather building consensus within the group. Only then can you be reliably successful at not only overcoming competing companies, but overcoming the greatest alternative of all: a decision not to purchase anything.

"Looking to improve how you communicate with your market? There are only so many minutes in the day and effective communications must first successfully fight for those minutes, then deliver a message that resonates. The power to captivate is what will bring you a greater share of attention, and you can only win the customers who are paying attention to you. BioBM is here to help you win – at every step. We ensure that you win market share through winning and maintaining another important share: share of attention. The days of marketing by interruption are fading away. The days of marketing by captivation have arrived. These days can be yours. Seize them."

Increasing Customer Affinity

Affinity has a transformational value on brands.

Google, Facebook, Apple and Amazon have all moved beyond having a simple transactional relationship with their customers to one that creates intimacy and serves their needs in a more holistic manner. These companies are generous, they are unselfish, and their approach is well beyond one of asking for the next sale. Whereas most companies self-promote in order to obtain the customer’s next purchase, elite brands seek not only to create customer loyalty, but to be loyal to their customers.

The overwhelming majority of companies are only good at fostering transactional affiliations with customers. They ask for their business, the customer gives it to them, and that is largely the end of the relationship. Companies frequently try to obtain repeat business; those who do it well attract supporters – customers who have moved beyond individual transactions, consciously prefer your brand, and buy repeatedly. Relatively few companies are effective at recruiting promoters – people who actively share their positive impression of your brand through advocacy to others. Brands which have strong networks of promoters are often very successful, but there is a fourth level of customer affinity that not only drives even greater loyalty, but also leverages customer assets to build brand value further, creating a positive feedback loop for both the brand and its customers: co-creation.

Co-creators actively add value to the brand by contributing to its offerings for other customers. They are so invested in the brand that they add to it themselves. This may be altruistic, but may also be to realize some kind of return, be it financial, recognition, or otherwise.

Increasing Affinity

Most companies pay careful attention to how loyal their customers are to them, measuring things like net promoter score and tracking sentiment on social media. They think that good customer service will win the loyalty of customers, and while good customer experiences may turn transactors into supporters and perhaps even the occasional promoter, good service is not enough to routinely transform customers’ affinity to the highest levels. In order to move up the affinity ladder, brands need to not only focus on how loyal their customers are, but how loyal the brand is to their customers. If a customer is anything more than a transactor, they are giving you more than money. Likewise, you need to be doing something more than selling products and services (in other words, creating transactions) to better foster that affinity. You need to actively add value to the lives of your customers outside of the transactional realm.

Building co-creation opportunities often, but not always, requires a degree of altruism: you must seek to provide opportunities for your target market that do not actually cost them anything.

Examples of Co-Creation

Many businesses are built entirely around co-creation. Yelp and other user-driven recommendation websites are almost entirely based on it. Facebook is driven by co-creation. Airbnb is a co-creative endeavor, relying on its hosts to build the success of its platform. Your business, however, does not need to be centered on a co-creation business model in order to leverage co-creation for increased customer affinity.

Customer-centric resources are tools that any company can use to greatly heighten customer affinity. By helping customers solve problems outside the context of a buying journey, you will provide massively positive experiences that will increase affinity. While resources do not require a co-creation component, such a component may be integrated into them. Consider the Nike+ ecosystem, where users can share workouts, compare progress with friends, and help motivate each other. The GoPro Channel is another well-known co-creation resource, where GoPro leverages its own popularity to support its customers’ best creations.

Social Media, “Engagement” and the Affinity Failure

Many marketers consider themselves to have succeeded at forging relationships with customers if they have high “engagement” metrics or large social followings. These are not indicators of affinity and are often vanity metrics. A social follow is by no means an indication of support, and it certainly does not suggest that the follower will promote your brand. In the life sciences and most B2B industries, social media is largely a platform for the dissemination of content. It is a utilitarian tool. While the ability to foster personal relationships with members of your target audience certainly exists, social media is not a natural channel for brand-customer communication. If your goals are to increase your audience size and reach, seek new social followers. If your goals are to increase customer affinity, look for non-transactional ways to provide value to your audience.

As customers not only take greater control of their purchasing decision journeys but compress them as well, brand affinity becomes increasingly important. Brands able to create heightened levels of customer affinity will have an immense advantage in an accelerated journey that shortens the consideration and evaluation phases. Customers are increasingly making decisions based on established preferences. The brands with the greatest customer affinity will be the winners.

"Looking for ways to increase customer affinity? BioBM develops resources for life science brands that grow their audiences and enable them to dominate their brand space. If domination is on your brand’s agenda, then contact BioBM today."

The End Is Not Nigh (now let’s get serious…)

People love to decry the end of marketing. It’s a good attention-getter. While those who shout about the coming of the end of marketing from their soapboxes are usually guilty of lacking realism or using poor logic, they do make us think about the future and that can be a learning experience. Let’s take an example…

Knowledge@Wharton recently published an interesting, albeit shortsighted and overly apocalyptic, article about the end of marketing and what, according to the author, will be the very narrow opportunities to engage audiences that remain in the future. The author does a very good job of identifying trends but a very bad job of predicting what the future will likely look like; both the good and the bad provide important lessons and highlight valuable opportunities.

First, the trends. No reason to discuss these much because most should be more or less obvious to anyone reading this.

  1. People would rather listen to other people than brands.
  2. People are going to greater lengths to avoid the onslaught of advertisement.
  3. Marketing technology “cannot truly understand the complexities of consumer intent” and therefore hitting the trifecta of the right message on the right channel at the right time is exceedingly difficult. (This I would actually say is up for debate. It’s a gray area. A discussion for another time, perhaps…)
  4. Marketers are overwhelming digital channels, further driving users to avoid marketing out of simple necessity. See point #2.

And here are the author’s four corresponding points of how he envisions the future:

  1. “As consumers bypass media with greater ease, the social feed is the wormhole to the entire online experience.”
  2. “As consumers outcompete marketers for each other’s attention, every piece of media contained in the feed is not only shareable, but shoppable.” – basically, he’s arguing that social channels become capable of performing transactions.
  3. “As the individual controls the marketing experience, communication shifts from public to semi-private.” In other words, people move from things like Facebook to things like Snapchat, where there are fewer ads and more privacy.
  4. Only two types of marketing will remain: discounts / sales and transparent sponsored content.

These predictions amount to a wild fantasy.

The most obvious flaw in the author’s reasoning is the assumption that a completely shoppable social media ecosystem would somehow evade the rules everyone else has to play by – namely, that when marketing becomes overwhelming, the audience blocks it out or leaves. It also ignores the plain fact that the large majority of things people buy are not found organically via social media. There is no shortage of people who shop. Decisions may be influenced in the social sphere, and perhaps some impulse purchases both begin and end there, but those are the exception; the overwhelming majority of purchasing decisions do not occur entirely within the social sphere, and that would not change if social channels were empowered with transactability.

The real world contains a great deal of equilibrium. Marketers’ ability to target people and people’s ability to tune them out is a balancing act – a cat-and-mouse game. Technology works both ways: as new channels and technologies are born, there are more ways to reach customers, but as channels are flooded, the impact of each individual effort diminishes. Marketing self-regulates by decreasing its own ROI as utilization of any particular channel increases.

So What Will the Future of Marketing Look Like?

Many channels will certainly continue their trend toward ineffectiveness. Audiences, fed up with maddening digital display advertising techniques, will likely continue to adopt ad-blocking technology and erode the potential of that channel. Email, while still rated as a high-ROI channel, may face a perilous future as email service providers become better at filtering out promotions. Social media will certainly take on a larger share of permission-based marketing, but relying too heavily on “rented” audiences will remain a risky business. Increasing utilization of content marketing will continue to add noise and, in turn, raise its own cost by requiring better and better content to capture the inherently limited resource it competes for: the audience’s attention. Increased use of social media may, if adoption grows as we project, fall victim to a similar effect, limiting brands’ ability to market effectively through social channels.

Not all developments will be bad. A decline in interruption tactics will fundamentally shift how marketing is viewed: from a tool to generate demand to a mechanism for delivering value to audiences and a source of strategic advantage. Customer-centric resources and other owned platforms will proliferate as companies seek new ways to deliver value to customers while increasing customer–brand affinity. Companies with strong brand affinity will create sustainable advantage for themselves as they shortcut and compress customer decision journeys. Additionally, new and as-yet-unknown channels will develop at an increasingly rapid pace. Consider that until about 20 years ago, no digital channels existed at all. Accelerating technology development will continue this trend and also enable more personalized, coordinated, and targeted marketing in a manner accessible and usable by companies of all sizes, budgets, and capabilities.

I’m not going to try to pinpoint detailed specifics – I’m not claiming to be a psychic and it would be a waste of your time to read simple conjecture – but there are things that we can be fairly certain of given current trends, a bit of logic, and a hint of foresight. Marketing isn’t going anywhere, and while in the future it may not look quite like it does today, it will still be something that Philip Kotler would distinctly recognize.

"Marketing is a race, but unlike the 200 meter sprint there aren’t any referees that will call you for a false start. Get a jump on your competition, charge forward on the path to market domination, and start leveraging the next generation of marketing strategies today. Work with BioBM."

Remarketing by the Numbers

We recently cited some newly released findings from the Boston Consulting Group (BCG) stating that “display retargeting from paid search ads can deliver a 40 percent reduction in CPA.” It was met with some hesitation from Mariano Guzmán of Laboratorios Conda, who stated:

“[…] when I have clicked on a [life science website] what I have experienced is a tremendous amount of retargeting for 1 month that I have not liked at all as an internet user, and I do not feel my clients would as well”

Being me, I like to answer questions with facts as much as possible, so I dug some up. This one’s for you, Mariano!

To directly address Mariano’s concern, I found some studies on people’s opinions of retargeting. A 2012 Pew Research study found that 68% of people are “not okay with it” due to behavior tracking, while 28% are “okay with it” because of more relevant ads and information (4% had no opinion). I’m a little skeptical of the Pew study because it primed respondents with reasons to be okay or not okay with remarketing: in effect, respondents were choosing between behavior tracking plus more relevant ads versus no behavior tracking plus less relevant ads. But when users actually see retargeted ads, the ads don’t announce “by the way, we’re tracking your behavior.” Are some users aware of the tracking? Certainly. Might some think of it consciously? On occasion, sure, but nowhere near 100% of the time – yet 100% of the Pew study respondents were made aware of it.

A slightly more recent 2013 study commissioned by Adroit Digital and performed by Toluna asked the question in a much more neutral manner (see page three of the linked study). It found that 30% of respondents have a positive impression of a brand whose retargeting ads they see, only 11% have a negative impression, and 59% are neutral.

The two studies did agree on one thing: remarketing ads get noticed. In both, almost 60% of respondents noticed ads related to sites they had previously visited or products they had viewed.

Now to the undeniably positive side… The gains a company stands to make from remarketing.

In addition to the 40% reduction in cost per action cited in the aforementioned BCG study, a 2014 BCG report entitled “Adding Data, Boosting Impact: Improving Engagement and Performance in Digital Advertising” found that retargeting improves overall cost per click (CPC) by 10%.

A 2010 comScore study evaluated the change in branded search queries driven by different types of digital advertising and found that retargeting provided the largest increase: 1,046%.

In a 2011 Wall Street Journal article, Sucharita Mulpuru, an analyst at Forrester Research, stated that retail conversion rates are 3% on PCs and 4% to 5% on tablets. According to the National Retail Federation, 8% of customers will return to make a purchase on their own; retargeting increases that figure more than three-fold, to 26%.
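The return-rate figures above can be combined into a quick back-of-envelope estimate of retargeting’s impact on repeat purchases. A hedged sketch: the visitor volume below is hypothetical, the ~3% conversion rate is the Forrester figure cited above, and the 8% vs. 26% return rates are the NRF and retargeting figures cited above.

```python
def expected_repeat_purchases(visitors, return_rate, conversion_rate):
    """Expected purchases from prior visitors who return and convert."""
    return visitors * return_rate * conversion_rate

visitors = 10_000   # hypothetical monthly site visitors
conversion = 0.03   # ~3% retail conversion rate (Forrester, per the article)

organic = expected_repeat_purchases(visitors, 0.08, conversion)     # 8% return on their own
retargeted = expected_repeat_purchases(visitors, 0.26, conversion)  # 26% return with retargeting

print(organic, retargeted)  # 24.0 78.0 -> a 3.25x lift in repeat purchases
```

This is deliberately simplistic (it assumes the conversion rate is the same for organic and retargeted returners), but it illustrates why even a modest media spend on retargeting can pay for itself.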

There are many more studies singing the praises of remarketing; however, I wanted to stay away from case studies that investigate only single companies, as well as from data collected and presented by advertising service providers.

Here are my thoughts on the matter: do some customers view retargeting unfavorably? Certainly, but that’s the nature of advertising – no matter what form it takes, some people will object to it. Given that there is nothing ethically wrong with retargeting, we can’t abandon a tactic proven to be highly effective just because some people object to it. In the end, it’s our job as marketers to help create success for the organizations we serve.
