Earlier this month, web developers Nublue published this article, which purports to test various copywriting ‘resources’. The survey pits a freelance copywriter, Craig Wright, against four content mills: Copify, Text Broker, Content Brokers and Odesk.
Craig, who trades as Straygoat Copywriting, was named in the first version of the article. At his request, it was edited to remove his name, but he ‘outed’ himself anyway by referring to the post on Twitter.
Conflict of interest
Several comments on the blog note the existing relationship between Nublue and content mill Copify, which comes top in the test.
The facts are as follows.
- Copify was co-founded by a former Nublue employee
- Copify operates from the same building as Nublue, and staff from the two firms know each other personally (confirmed in comments on the post)
- Copify is a supplier to Nublue
- Copify was nominated by Nublue for a Mashable award.
I’ll leave it to you to decide whether this constitutes a conflict of interest for Nublue, and whether the content delivered by Copify for the test was likely to be representative of its usual standard. (Craig was made aware that his copy was being used for a test, so it’s reasonable to assume that Copify also knew.)
Shortly after the Nublue article appeared, I noted the points above on Twitter and received this reply from Copify:
As a rebuttal of the bias charge, it seemed rather thin; as a challenge, it seemed irrelevant. The real question is not whether I’m better than Copify’s writers, but whether Craig Wright is – despite what the Nublue test says. And that’s the focus of this article.
Scope of the test
Nublue asked for a 300-word blog post reviewing the beta version of Internet Explorer 9.
It rated the content providers in five areas, awarding scores out of ten in each category to arrive at a total score out of 50:
- cost
- ease of placing order
- speed of service
- usability of website
- quality of the finished article.
Copify and Text Broker came first and second with 49/50 and 46/50 respectively, with Craig in third place on 31/50. The other two content mills scored 9/50 and 4/50 because of problems with their service that don’t concern us here.
Nublue’s assessment method, while perhaps even-handed prima facie, is actually highly questionable – because the choice of categories is arbitrary, and the weightings given to them are unbalanced.
For a content mill, ‘usability of website’ is key; for a freelance copywriter, it’s marginally relevant at best. A freelancer’s site is a portfolio with an email address attached; it has no real functionality.
Similar observations could be made on ‘ease of placing order’ and ‘speed of service’. A freelancer is unlikely to offer instant online ordering, because they’d want to discuss the brief before submitting a price. Similarly, an in-demand freelancer might not be able to turn a commission round in 24 hours, but their clients are willing to wait in order to get the writer they want.
Including these three categories inevitably biases the results towards content mills; in effect, the categories imply a preference for a Copify-style content service before we even get to the scoring. A fairer test would also reflect the strengths that freelance copywriters offer: listening, developing the brief, making helpful suggestions, responding to feedback, developing an understanding of the client’s business and so on.
Even if we give Nublue’s choice of categories the benefit of the doubt, there’s still a major problem with the weightings they’ve been given. Each category is evenly weighted (ten points), but four of the five categories focus on process rather than product, making the weighting given to quality far too low.
For example, are ‘usability of website’ and ‘ease of placing order’ really just as important as ‘quality of the finished article’? And is the way copy is ordered and delivered really three times as important as the actual quality of that copy, as the combined weighting of ‘usability of website’, ‘ease of placing order’ and ‘speed of service’ (30 points) implies?
In this model, a content provider could potentially deliver completely unusable content but still score 40/50 because they did it quickly, cheaply and efficiently. This seems like a case of ‘the operation was successful, but the patient died’.
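To make that imbalance concrete, here's a minimal sketch of Nublue's even weighting (five categories, ten points each, summed to a score out of 50). The function name and example scores are mine, for illustration only:

```python
def nublue_total(cost, ease, speed, usability, quality):
    """Nublue's model: five categories, each scored out of 10, summed to /50."""
    return cost + ease + speed + usability + quality

# A provider with perfect cost and process scores but unusable copy
# (quality = 0) still reaches 40/50 under this weighting.
print(nublue_total(10, 10, 10, 10, 0))  # 40
```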
Alternative assessment method
We could use a scoring method that gives equal weight to cost, quality and service. While this is also arbitrary, it does reflect the methods used elsewhere, for example at FreeIndex (customers score providers out of five for value for money, service and quality).
However, I would argue that, in copywriting specifically, quality is far more important than service and price. I’ll develop my argument more fully in a future post, but for now I will simply observe that the commercial and economic value delivered by (for example) a blog post depends on its power to generate social-media interest, attract backlinks and build authorial reputation – all of which are directly related to its quality.
Therefore, I propose a higher weighting of 30 points for quality, with cost and service together accounting for a further 20 points, to give this model:
- Cost: Score out of 10, as now.
- Service: Score out of 10, calculated by adding Nublue's scores for ease of placing order, speed of service and usability of website, then dividing the total by 3.
- Quality: Score out of 30 (calculated by multiplying Nublue’s score by 3)
As before, this gives a total score out of 50.
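The proposed model can be sketched as follows. The per-category inputs here are hypothetical (the article reports only total scores, not category breakdowns), and the function name is mine:

```python
def reweighted_total(cost, ease, speed, usability, quality):
    """Proposed model. All inputs are Nublue-style scores out of 10.

    Returns a total out of 50: cost counts /10, service /10 (the mean of
    the three process categories), and quality /30 (Nublue's score x 3).
    """
    service = (ease + speed + usability) / 3
    return cost + service + quality * 3

# Hypothetical example: a provider scoring 8/10 in every category
# totals 8 + 8 + 24 = 40/50 under the new weighting.
print(reweighted_total(8, 8, 8, 8, 8))  # 40.0
```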
The table below shows how the three medal-winners would score under this new system, using Nublue’s own scores.
| Copify | Text Broker | Craig Wright |
| --- | --- | --- |
Assessment of quality
When I caught up with Craig, I found him understandably ‘angry’ at the results of the test. While he’s realistic about the way his costs and service stack up against the content mills, he takes issue with the evaluation of the end product.
‘I don’t care about being seen as expensive or slower,’ he said. ‘But what I don’t agree with is that the end result is of the same quality. As I stated in my comments on the blog, there is no way that Copify article can be seen as an objective review. It may as well have been written by Microsoft’s PR department.’
The Nublue brief is very clear: it asks for a review of IE9, not just coverage. The text of the brief includes these words (my italics):
This article will be aimed at Internet professionals and webmasters that may already be using IE9 or thinking about trying it.
The article should review Internet Explorer 9 with a focus on its performance and functionality for web users and developers. The article should make comparisons with other browsers that are available such as Firefox that are currently seen to have stronger developer ecosystems and stands [sic] compliant features.
While the Copify piece uncritically lists the features of IE9, press-release style, Craig's piece expertly probes the areas where IE9 needs to prove itself to the web-developer community. Is it really fair to give both articles the same score?
‘It would have been good if some developers looked at the articles and commented, because I know which one they would have disliked the most – the pro-IE9 [Copify] one!’ laughs Craig. ‘Considering that developers were part of the target market, you’d expect something in the Copify article to provide them with a bit of info about how it is going to affect them. Would an IT expert/web developer have considered the Copify article was well researched and thorough? I doubt it!’
A commenter on the Nublue blog echoes these sentiments:
I did find it interesting that the Copify article and the freelance writer were ranked equally. I’m not even sure that the Copify article responded to the prompt. No evaluation of performance and functionality for web developers, no substantive comparison to other browsers, especially in the context of compatibility and developer ecosystems and no real consensus on whether IE9 is a competitor (“a step in the right direction”). Moreover, it’s technically inaccurate where there is substantive analysis.
To get an impartial developer’s perspective, I asked Gareth Thompson of Codepotato for his views on the three articles, as well as his score out of 30 for each. The texts were sent in a Word document with no accompanying details, and I didn’t outline the scope and intention of this post. As far as possible, it was a ‘blind’ assessment – a copywriting ‘Pepsi challenge’.
Gareth rated the Copify article 15/30, Text Broker 19.5/30 and Craig's piece 25.5/30. ‘I think that article B [Craig] is a better article from the technology or “capabilities” point of view, as it explains the improvements that most web developers will want to know about,’ he said. ‘Article C [Text Broker] just seems to miss the mark a little. Personally, out of the three articles I would have been more inclined to bookmark/recommend article B [Craig].’
When we plug Gareth’s scores into the more balanced scoring system, here’s what we find:
| Copify | Text Broker | Craig Wright |
| --- | --- | --- |
So there you have it – a much more balanced outcome. But I’m giving the final honours to Craig, for three reasons. Firstly, because Nublue marked him down on speed of service just because he was on holiday when they approached him. Secondly, because his article has the all-important social-media appeal that would have delivered true long-term value to the client – as Gareth confirmed in a blind test. And finally because he came top, convincingly, on quality – which should surely be the ultimate deciding factor.
I’ll leave the last word to Craig himself. ‘For an in-depth and considered view that gives your readership the answers they are looking for and adds value to your site, you’re better off with a freelance copywriter,’ he says. ‘One who takes the time to look past the press releases and investigate the real issues, concerns etc. – a process that takes more than an hour, and so costs more than £15!’
In a future post, I’ll expand on this point, explaining exactly why quality of content is so important – and why you ignore it at your peril.