TL;DR Summary
- Sites can use the HTML attribute rel="nofollow" on links to instruct search engines not to credit a link with any importance for the purposes of SEO
- These instructions don’t carry authority: they are merely suggestions
- Search engines, including Google, choose whether to listen to the nofollow suggestion or not
- They generally do not listen to the suggestion
- If you can generate contextually relevant backlinks from sites which use nofollow tags, go for it! You’ll likely get value from them regardless. Just don’t be spammy.
The History of HTML Link Relationship Tags
As the name implies, a link relationship tag provides context to search engines and other automated crawlers on the nature of the relationship between the source page and the destination page. Some very common ones which marketers may run into are rel=”sponsored”, which denotes links in sponsored content; rel=”ugc”, which denotes links in user-generated content; and rel=”nofollow”, which is supposed to tell search engines to completely ignore a link. There are over 100 link relations recognized by the Internet Assigned Numbers Authority, but most of them are somewhat arcane and not used by search engines in any way that is meaningful to marketers.
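For illustration, here is a minimal sketch of how those rel values appear on links and how a crawler might read them. It uses Python with BeautifulSoup purely as an example; the markup and URLs are made up.

```python
from bs4 import BeautifulSoup

# Hypothetical page markup showing the three rel values marketers encounter most often.
html = """
<p>Read our <a href="https://example.com/review" rel="sponsored">sponsored review</a>,
see this <a href="https://example.com/forum-post" rel="ugc nofollow">forum comment</a>,
or visit a <a href="https://example.com/partner">normally linked partner page</a>.</p>
"""

soup = BeautifulSoup(html, "html.parser")
for a in soup.find_all("a"):
    # BeautifulSoup returns rel as a list of values, e.g. ["ugc", "nofollow"], or None if absent.
    rel = a.get("rel") or []
    print(a.get("href"), "->", rel if rel else "no rel attribute (a plain, followed link)")
```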
Link relationship tags, AKA rel tags, came into being in 2005, largely in response to the need for a nofollow tag to combat the excessive blog, comment, and forum spam which was extremely prevalent through the 2000s. Nofollow was proposed by Google’s Matt Cutts and Blogger’s Jason Shellen. For a long time, because they didn’t have a better option, Google and other search engines treated nofollow tags as law: not only would they give no SEO benefit to nofollow links, Google wouldn’t even index them.
The Evolution of Nofollow
As blog and comment spam became less of an issue, and as search engines became much more powerful and able to understand context, nofollow and similar relationship tags became less important to the search engines. Google effectively said as much in an announcement on their Search Central Blog on September 10, 2019:
When nofollow was introduced, Google would not count any link marked this way as a signal to use within our search algorithms. This has now changed. All the link attributes (sponsored, ugc, and nofollow) are treated as hints about which links to consider or exclude within Search. We’ll use these hints, along with other signals, as a way to better understand how to appropriately analyze and use links within our systems.
Why not completely ignore such links, as had been the case with nofollow? Links contain valuable information that can help us improve search, such as how the words within links describe content they point at. Looking at all the links we encounter can also help us better understand unnatural linking patterns. By shifting to a hint model, we no longer lose this important information, while still allowing site owners to indicate that some links shouldn’t be given the weight of a first-party endorsement.
As stated in the post, as of March 1, 2020 Google changed the role of link relationship tags, making them suggestions (or, in Google’s words, “hints”) rather than rules.
Context Is Key
As search engines continue to become more intelligent and human-like in their understanding of context within content, life science SEO professionals need to pay greater attention to context. A nofollow backlink with just one or two sentences in a comment on a relevant Reddit post may be worth more than an entire guest post on a site with little other content relevant to your field. Focus on doing all the things which you should be doing anyway, regardless of whether the link is nofollow or not:
- Post links only in relevant places
- Contribute meaningfully to the conversation
- Don’t be spammy
- Keep your use of links to a minimum
- Write naturally and use links naturally. Don’t force it.
Case: Laboratory Supply Network
Laboratory Supply Network started a backlinking campaign with BioBM in August 2023 which relied almost entirely on backlinks in comments from highly reputable websites (including Reddit, ResearchGate, and Quora), all of which use nofollow tags on their links. At the start of the campaign, their key rank statistics were:
- Average rank: 26.08
- Median rank: 14
- % of terms in the top 10: 45.00% (63 out of 140)
- % of terms in the top 3: 21.43% (30 out of 140)
Less than 8 months later, in March 2024, we had improved their search rank statistics massively:
- Average rank: 17.54
- Median rank: 7
- % of terms in the top 10: 61.11% (88 out of 144)
- % of terms in the top 3: 39.58% (57 out of 144)
Backlinking was not the only thing that Laboratory Supply Network was doing to improve its SEO – it has a longstanding and relatively consistent content generation program, for instance – but the big difference before and after was the backlink campaign (which, again, relied almost entirely on nofollow backlinks!). In the previous year, LSN’s search statistics didn’t improve nearly as much.
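As a side note, the rank statistics reported above are simple to reproduce from a list of tracked keyword ranks. Here is a minimal sketch using made-up example ranks, not LSN's actual data:

```python
from statistics import mean, median

# Hypothetical tracked keyword ranks (1 = top result); real campaigns track hundreds of terms.
ranks = [1, 2, 3, 5, 7, 9, 12, 18, 25, 40]

print(f"Average rank: {mean(ranks):.2f}")
print(f"Median rank: {median(ranks)}")
print(f"% of terms in the top 10: {sum(r <= 10 for r in ranks) / len(ranks):.2%}")
print(f"% of terms in the top 3: {sum(r <= 3 for r in ranks) / len(ranks):.2%}")
```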
Conclusions
Backlinking has long been a key component of a holistic SEO strategy, and it remains just as important as ever. Links are an important signal telling Google and other search engines what content is relevant and important with regards to any particular topic. While many highly reputable sites use rel=”nofollow” to try to discourage link spam, most link spam is more effectively dealt with in other ways, such as manual, automated, or community-driven moderation. Google knows these other moderation tools have become more effective, and therefore allows itself to treat the nofollow tag as more of a hint than a rule. If you are performing SEO for your life science company, don’t avoid sites just because they use nofollow. You can achieve good results in spite of it.
Sometimes you just have to let Google be Google.
Large, complex algorithms which pump out high volumes of decisions based in part on non-quantifiable inputs are almost inherently going to get things wrong sometimes. We see this as users of Google Search all the time: even when you provide detailed search queries, the top result might not be the best and not all of the top results might be highly relevant. It happens. We move on. That doesn’t mean the system is bad; it’s just imperfect.
Quality score in Google Ads has similar problems. It’s constantly making an incredibly high volume of decisions, and somewhere in the secret sauce of its algos it makes some questionable decisions.
Yes, Google Ads decided that a CTR of almost 50% was “below average”. This is not surprising.
If your quality score is low, there may be things you can do about it. Perhaps your ads aren’t as relevant to the search terms as they could be. Check the search terms that your ads are showing for. Does your ad copy closely align with those terms? Perhaps your landing page isn’t providing the experience Google wants. Is it quick to load? Mobile friendly? Relevant? Check PageSpeed Insights to see if there are things you can do to improve your landing page. Maybe your CTR actually isn’t all that high. Are you making good use of all the ad extensions?
But sometimes, as we see above, Google just thinks something is wrong when to our subjective, albeit professional, human experience everything seems just fine. That’s okay. Don’t worry about it. Ultimately, you shouldn’t be optimizing for quality score. It is a metric, not a KPI. You should be optimizing for things like conversions, cost per action (CPA), and return on ad spend (ROAS), all of which you should be able to optimize effectively even if your quality score seems sub-optimal.
Not all impressions are created equal.
We don’t think about run of site (ROS) ads frequently because we don’t often use them; we try to be very intentional with our targeting. However, we recently had an engagement where we were asked to design ads for a display campaign on a popular industry website. The goal of the campaign was brand awareness (also something to avoid, but that’s for another post). The client was engaging with the publisher directly. We recommended the placement, designed the ads, and provided them to the client, figuring the job was done. The client later returned to us asking for more ad sizes: the desired placement was not available, so the publisher had suggested run of site ads instead.
Some background for those less familiar with display advertising
If you are familiar with placement-based display advertising, you can skip this whole section. For the relative advertising novices, I’ll explain a little about various ad placements, their nomenclature, and how ads are priced.
An ad which is much wider than it is tall is generally referred to as a billboard, leaderboard, or banner ad. These are referred to as such because their placement on webpages is often near the top, although that is far from universally true, and even where it is true they often appear lower on the page as well. In our example on the right, which is a zoomed-out screenshot of the Lab Manager website, we see a large billboard banner at the top of the website (outlined in yellow), multiple interstitial banners of various sizes (in orange) and a small footer banner (green) which was snapped to the bottom of the page while I viewed it.
An ad which is much taller than it is wide is known as a skyscraper, although ones which are particularly large and a bit thicker may be called portraits, and large ads with 1:2 aspect ratios (most commonly 300 x 600 pixels) are referred to as half page ads. Lab Manager didn’t have those when I looked.
The last category of ad sizes is the square or rectangle ads. These are ads which do not have a high aspect ratio; generally less than 2:1. We can see one of those highlighted in purple. There is also some confusing nomenclature here: a very common ad of size 300 x 250 pixels is called a medium rectangle but you’ll also sometimes see it referred to as an MPU, and no one actually knows the original meaning of that acronym. You can think of it as mid-page unit or multi-purpose unit.
As you can see, there are many different placements and ad sizes, and it stands to reason that all of these will perform differently! If we were paying for these on a performance basis, say with cost-per-click, the variability in performance between the different placements would be self-correcting: if I am interested in a website’s audience and I’m paying per click, then I [generally] don’t care where on the page the click is coming from. However, publishers don’t like to charge on a per-click basis! If you are a publisher, this makes a lot of sense. You think of yourself as being in the business of attracting eyeballs, not getting people to click on ads (even though, to some extent, you are). You simply want to publish content which attracts your target market. Furthermore, publishers definitely don’t want their revenues to be at the whims of the quality of the ads their advertisers post, nor do they want to have to obtain and operate complex advertising technology to optimize for cost per view (generally expressed as cost per 1,000 views, or CPM) when their advertisers are bidding based on cost per click (CPC).
What are Run Of Site Ads and why should you be cautious of them?
You may have noticed that the above discussion of ad sizes didn’t mention run of site ads. That is because run of site ads are not a particular placement nor a particular size. What “run of site” means is essentially that your ad can appear anywhere on the publisher’s website. You don’t get to pick.
Think about that. If your ads can appear anywhere, then where are they appearing in reality? They are appearing in the ad inventory which no one else wanted to buy. Your ads can’t appear in the placements which were sold. They can only appear in the placements which were not sold. If your insertion order specifies run of site ads, you are getting the other advertisers’ leftovers.
That’s not to say that ROS ads are bad in all circumstances, nor that publisher-side ad salespeople who try to sell them are trying to trick you in any way. There is nothing malicious going on. In order to get value from ROS ads, you need to do your homework and negotiate accordingly.
How to get good value from ROS ads
Any worthwhile publisher will be able to provide averaged metrics for their various ad placements. If you look at their pricing and stats you may find something like this:
| Ad Format | CTR | CPM |
| --- | --- | --- |
| Multi-unit ROS | 0.05% | $40 |
| Billboard Banner | 0.35% | $95 |
| Medium Rectangle | 0.15% | $50 |
| Half Page | 0.10% | $50 |
| Leaderboard | 0.10% | $45 |
One good assumption is that if people aren’t clicking an ad, they’re not paying attention to it. Averaged out over time, we can’t chalk the difference up to the ads in certain positions simply being better, and there is no logical reason why the position of an ad alone would make a person less likely to click on it, aside from it not getting the person’s attention in the first place. This is why billboard banners have very high clickthrough rates (CTR): they’re the first thing you see at the top of the page. Publishers like to price large ads higher than smaller ads, but it’s not always the case that the larger ads have a higher CTR.
With that assumption, take the inventory offered and convert the CPM to CPC using the CTR. The math is simple: CPC = CPM / (1000 * CTR).
| Ad Format | CTR | CPM | Effective CPC |
| --- | --- | --- | --- |
| Multi-unit ROS | 0.05% | $40 | $80 |
| Billboard Banner | 0.35% | $95 | $27 |
| Medium Rectangle | 0.15% | $50 | $33 |
| Half Page | 0.10% | $50 | $50 |
| Leaderboard | 0.10% | $45 | $45 |
Here, we see those really “cheap” run of site ads are actually the most expensive on a per click basis, and the billboard banner is the cheapest! Again, even for more nebulous goals like brand awareness, we can only assume that CTR is a proxy for audience attentiveness. Without eye tracking or mouse pointer tracking data, which publishers are highly unlikely to provide, CTR is the best attentiveness proxy we have.
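To make the conversion concrete, here is a minimal sketch of the CPM-to-effective-CPC math. The figures are the example numbers from the table above, not real publisher data:

```python
def effective_cpc(cpm: float, ctr: float) -> float:
    """Convert a CPM (cost per 1,000 impressions) and a CTR into an effective cost per click."""
    # Cost per impression is cpm / 1000; dividing by CTR gives cost per click.
    return cpm / (1000 * ctr)

# Example placements from the table above (CTR expressed as a decimal).
placements = {
    "Multi-unit ROS":   (40, 0.0005),
    "Billboard Banner": (95, 0.0035),
    "Medium Rectangle": (50, 0.0015),
    "Half Page":        (50, 0.0010),
    "Leaderboard":      (45, 0.0010),
}

for name, (cpm, ctr) in placements.items():
    print(f"{name}: ${effective_cpc(cpm, ctr):.2f} effective CPC")
```

Running this reproduces the "Effective CPC" column: the "cheap" $40 CPM run of site placement works out to roughly $80 per click, while the $95 CPM billboard banner works out to roughly $27.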
With this information, you can make the case to the publisher to drop the price of their ROS ads. They might do it. They might not. Most likely, they’ll meet you somewhere in the middle. By making a metrics-driven case to them, however, you’ll be more likely to get the best deal they are willing to offer. (ProTip: If you’re not picky about when your ads run, go to a few publishers with a low-ball offer a week or so before the end of the month. Most publishers sell ads on a monthly basis, and if they haven’t sold all their inventory, you’ll likely be able to pick it up at a cut rate. They get $0 for any inventory they don’t sell. Just be ready to move quickly.)
The other situation in which ROS ads are useful and can be a good value is when you want to buy up all the ad inventory. Perhaps a highly relevant publisher is running a highly relevant feature, and that all adds up to an audience you want to saturate. You can pitch a huge buy of ROS ads which will soak up the remaining inventory for the period of time when that feature is running, and potentially get good placements at the ROS price. Just make sure you know what you’re buying and the publisher isn’t trying to sell their best placements on the side.
Lessons
- Run of site ads aren’t all bad, but novice advertisers can end up blowing a bunch of money if they’re not careful.
- Regardless of placement, always be mindful of the metrics of the ads you’re buying.
- Even if your campaign goals are more attention-oriented than action-oriented, CTR is a good proxy for attentiveness, and effective CPC is a good measure of the value you’re getting.
Unfortunately, Google has attempted to make auto-applied recommendations ubiquitous.
Google Ads has been rapidly expanding their use of auto-applied recommendations recently, to the point where it briefly became my least favorite thing until I turned almost all auto-apply recommendations off for all the Google Ads accounts which we manage.
Google Ads has a long history of thinking it’s smarter than you and failing. Left unchecked, its “optimization” strategies have the potential to drain your advertising budgets and destroy your advertising ROI. Many users of Google Ads’ product ads should be familiar with this. Product ads don’t allow you to set targeting, and instead Google chooses the targeting based on the content on the product page. That, by itself, is fine. The problem is when Google tries to maximize its ROI and looks to expand the targeting contextually. To give a practical example of this, we were managing an account advertising rotary evaporators. Rotary evaporators are very commonly used in the cannabis industry, so sometimes people would search for rotary evaporator related terms along with cannabis terms. Google “learned” that cannabis-related terms were relevant to rotary evaporators: a downward spiral which eventually led to Google showing this account’s product ads for searches such as “expensive bongs.” Most people looking for expensive bongs probably saw a rotary evaporator, didn’t know what it was but did see it was expensive, and clicked on it out of curiosity. Google took that cue as rotary evaporators being relevant for searches for “expensive bongs” and then continued to expand outwards from there. The end result was us having to continuously play negative keyword whack-a-mole to try to exclude all the increasingly irrelevant terms that Google thought were relevant to rotary evaporators because the ads were still getting clicks. Over time, this devolved into Google expanding the rotary evaporator product ads to searches for – and this is not a joke – “crack pipes”.
The moral of that story, which is not about auto-applied recommendations, is that Google does not understand complex products and services such as those in the life sciences. It likewise does not understand the complexities and nuances of individual life science businesses. It paints in broad strokes, because broad strokes are easier to code, the managers don’t care because their changes make Google money, and considering Google has something of a monopoly it has very little incentive to improve its services because almost no one is going to pull their advertising dollars from the company which has about 90% of search volume excluding China. Having had some time to see the changes which Google’s auto-apply recommendations make, you can see the implicit assumptions which got built in. Google either thinks you are selling something like pizza or legal services and largely have no clue what you’re doing, or that you have a highly developed marketing program with holistic, integrated analytics.
As an example of the damage that Google’s auto-applied recommendations can do, take a CRO we are working with. Like many CROs, they offer services across a number of different indications. They have different ad groups for different indications. After Google had auto-applied some recommendations, some of which were bidding-related, we ended up with ad groups which had over 100x difference in cost per click. In an ad group with highly specific and targeted keywords, there is no reasonable argument for how Google could possibly optimize in a way which, in the process of optimizing for conversions, it decided one ad group should have a CPC more than 100x that of another. The optimizations did not lead to more conversions, either.
Google’s “AI” ad account optimizer further decided to optimize a display ad campaign for the same client by changing bidding from manual CPC to optimizing for conversions. The campaign went from getting about 1800 clicks / week at a cost of about $30, to getting 96 clicks per week at a cost of $46. CPC went from $0.02 to $0.48! No wonder they wanted to change the bidding; they showed the ads 70x less (CTR was not materially different before / after Google’s auto-applied recommendations) and charged 24x more. Note that the targeting did not change. What Google was optimizing for was their own revenue per impression! It’s the same thing they’re doing when they decide to show rotary evaporator product ads on searches for crack pipes.
Furthermore, Google’s optimizations to the ads themselves amount to horribly generic guesswork. A common optimization is to simply include the name of the ad group or terms from pieces of the destination URL in ad copy. GPT-3 would be horrified at the illiteracy of Google Ads’ optimization “AI”.
A Select Few Auto-Apply Recommendations Are Worth Leaving On
Google has a total of 23 recommendation types. Of those, I always leave on:
- Use optimized ad rotation. There is very little opportunity for this to cause harm, and it addresses a point difficult to determine on your own: what ads will work best at what time. Just let Google figure this out. There isn’t any potential for misaligned incentives here.
- Expand your reach with Google search partners. I always have this on anyway. It’s just more traffic. Unless you’re particularly concerned about the quality of traffic from sites which aren’t google.com, there’s no reason to turn this off.
- Upgrade your conversion tracking. This allows for more nuanced conversion attribution, and is generally a good idea.
A whole 3 out of 23. Some others are situationally useful, however:
- Add responsive search ads can be useful if you’re having problems with quality score and your ad relevance is stated as being “below average”. This will, generally, allow Google to generate new ad copy that it thinks is relevant. Be warned, Google is very bad at generating ad copy. It will frequently keyword spam without regard to context, but at least you’ll see what it wants you to do to generate more “relevant” ads. Note that I suggest this over “improve your responsive search ads” so that Google doesn’t destroy the existing ad copy which you may have spent time and effort creating.
- Remove redundant keywords / remove non-serving keywords. Google says that these options will make your account easier to manage, and that is generally true. I usually have these off because if I have a redundant keyword it is usually for a good reason and non-serving keywords may become serving keywords occasionally if volume improves for a period of time, but if your goal is simplicity over deeper data and capturing every possible impression, then leave these on.
That’s all. I would recommend leaving the other 18 off at all times. Unless you are truly desperate and at a complete loss for ways to grow your traffic, you should never allow Google to expand your targeting. That lesson has been repeatedly learned with Product Ads over the past decade plus. Furthermore, do not let Google change your bidding. Your bidding methodology is likely a very intentional decision based on the nature of your sales cycle and your marketing and analytics infrastructure. This is not a situation where best practices are broadly applicable, but best practices are exactly what Google will try to enforce.
If you really don’t want to be bothered at all, just turn them all off. You won’t be missing much, and you’re probably saving yourself some headaches down the line. From our experience thus far, it seems that the ability of Google Ads’ optimization AI to help optimize Google Ads campaigns for life sciences companies is far lesser than its ability to create mayhem.
Why not leverage our understanding to your benefit? Contact Us.
I was reading the MarketingCharts newsletter today and saw a headline: “What Brings Website Visitors Back for More?” The data was based on a survey of 1000 people, and they found the top 4 reasons were, in order:
1) They find it valuable
2) It’s easy to use
3) There is no better alternative for the function it serves
4) They like its mission / vision
I thought about it for a second and had a realization – this is why people are loyal to ANYTHING! And achieving these 4 things should be precisely our goal as marketers:
1) Clearly demonstrate value
2) Make your offerings – and your marketing – accessible
3) Show why your particular thing is the best. (Hint: If it’s not the best you probably need to refine your positioning to find the market segment that it is the best for.)
4) Tell your audiences WHY. Get them to buy into it. Don’t just drone on about the what, but sell them on an idea. Captivate them with a belief!
Do those 4 things well, you win.
BTW, the MarketingCharts newsletter is a really good, easy-to-digest newsletter – mostly B2C focused, but there’s some great stuff in there even for a B2B audience, and you can get most of the key points of each day’s newsletter in under a minute.
Principal Consultant Carlton Hoyt recently sat down with Chris Conner for the Life Science Marketing Radio podcast to talk about decision engines, how they are transforming purchasing decisions, and what the implications are for life science marketers. The recording and transcript are below.
Transcript
CHRIS: Hello and welcome back. Thank you so much for joining us again today. Today we’re going to talk about decision engines. These are a way to help ease your customer’s buying process when there are multiple options to consider. So we’re going to talk about why that’s important and the considerations around deploying them. So if you offer lots and lots of products and customers have choices to make about the right ones, you don’t want to miss this episode.
Marketers are used to seeing a lot of data showing that improving personalization leads to improved demand generation. The more you tailor your message to the customer, the more relevant that message will be and the more likely the customer will choose your solution. Sounds reasonable, right?
In most cases personalization is great, but what that data and all the “10,000-foot view” studies miss is that there is a subset of customers for whom personalization doesn’t help. There are times when personalization can actually hurt you.
When Personalization Backfires
Stressing the points which are most important to an individual works great … when that individual has sole responsibility for the purchasing decision. For large or complex purchases, however, that is often not the case. When different individuals involved in a purchasing decision have different priorities and are receiving different messages tailored to their individual needs, personalization can act as a catalyst for divergence within the group, leading different members to reinforce their own needs and prevent consensus-building.
Marketers are poor at addressing the problems in group purchasing. A CEB study of 5000 B2B purchasers found that the likelihood of any purchase being made decreases dramatically as the size of the group making the decision increases: from an 81% likelihood of purchase for an individual, to just 31% for a group of six.
For group purchases, marketers need to focus less on personalization and more on creating consensus.
Building Consensus for Group Purchases
Personalization reinforces each individual’s perspective. In order to more effectively sell to groups, marketers need to reinforce shared perspectives of the problem and the solution. Highlight areas of common agreement. Use common language. Develop learning experiences which are relevant to the entire group and can be shared among them.
Personalization focuses on convincing individuals that your solution is the best. In order to better build consensus, equip individuals with the tools and information they need to provide perspective on the problem to their group. While most marketers spend their time pushing their solution, the CEB found that the sticking point for most groups is agreeing on the nature of the solution that should be sought. By giving the individuals within a group who may favor your solution the ability to frame the nature of the problem for the others, you’ll help those who have a nascent desire to advocate for you get past this sticking point and guide the group to be receptive to your type of solution. Having helped them clear that critical barrier, you’ll be better positioned for the remaining fight against your direct competitors.
Winning a sale requires more than just understanding the individual. We’ve been trained to believe that personalization is universally good, but that doesn’t align with reality. For group decisions, ensure your marketing isn’t reinforcing the individual, but rather building consensus within the group. Only then can you be reliably successful at not only overcoming competing companies, but overcoming the greatest alternative of all: a decision not to purchase anything.