


Published by FirstyNews

Big data, big opportunities

The data at publishers’ disposal is snowballing. That creates headaches—but some exciting prospects too

This is an age of abundant data. Before digital technology arrived in publishing, the statistical information kept by most organisations was fairly rudimentary. Publishers knew little about how, where, when and why their books were selling, and retailers had few insights into the habits of their customers. Decisions were frequently based on experience and hunches—educated guesses that often proved correct, but that carried a lot of risk.

It is all much more scientific now. Nielsen BookScan data has given publishers a torrent of insights into their sales, while retailers have developed enormous databases to help them get closer to buyers. These statistics—together with data from internet searches, social media, buying histories, reader reviews and many other avenues—help to answer questions that would once have left publishers grasping at straws.

Together, these sources have snowballed into the new buzzword for this and other industries—‘Big Data’. It was a hot topic at the Frankfurt Book Fair, albeit one that has so far been more talked about than acted on. Even its definition—loosely, huge volumes of unstructured data that go beyond the realms of spreadsheets and databases—remains a little vague. But unlocking the secrets of big data might one day change the face of publishing.

Those who exploit it tend to be evangelical about it. Viktor Mayer-Schonberger, author of the international hit book Big Data, has compared it to the Age of Enlightenment—nothing less than a new way of understanding the world. US analyst Clay Shirky put it nicely when he said in a review of Mayer-Schonberger’s book: “Just as water is wet in a way that individual water molecules aren’t, big data can reveal information in a way that individual bits of data can’t.”

Others, though, have perceived it as an unnecessary headache, or even as a danger. Publishing is a creative industry, and there are many in it who do not like the way numbers are taking over from words. Decisions about what and how to publish should be made by humans rather than computers, they argue. It is certainly true that many of each year’s bestsellers could never have been predicted by simple algorithms.

And even if you do believe in big data, it doesn’t necessarily follow that it will make all the right decisions for you. The weight of data out there is enormous, and it can easily feel suffocating. Big data needs to be handled with a light touch and an inquisitive mind, and it is only as good as the people who analyse it. There is no point in piling up data if all you do is sit on it. Or as HarperCollins’ chief digital officer Chantal Restivo-Alessi memorably put it at Digital Book World earlier this year: “Big data is a little bit like teenagers talking about how many girlfriends they have.”

The real skill in big data is in putting all the pieces together—correlating figures rather than looking at each in isolation. In publishing, that means joining the dots between sales figures, social media activity, Google analytics, retailer feedback and much else. Gathering data is the easy part—crunching it is the real challenge. Doing so requires skills that have not traditionally been prevalent in publishing. Smart operators have realised this, and are making the recruitment of information scientists and analysts as important as that of good editors or sales people.
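
As a rough illustration of that correlation work, here is a minimal sketch in Python with pandas that joins weekly sales figures to social media mentions and asks how closely the two move together. The file names, column names and weekly granularity are invented for the example; the point is simply that the insight comes from combining sources rather than from any single figure.

import pandas as pd

# Hypothetical inputs: two CSV exports, each keyed by ISBN and week.
sales = pd.read_csv("weekly_sales.csv")        # columns: isbn, week, units_sold
mentions = pd.read_csv("social_mentions.csv")  # columns: isbn, week, mention_count

# Join the two sources on title and week, keeping only matching rows.
combined = sales.merge(mentions, on=["isbn", "week"], how="inner")

# How strongly do mentions track sales across the whole list?
overall = combined["units_sold"].corr(combined["mention_count"])
print(f"Correlation between mentions and sales: {overall:.2f}")

# The same figure calculated title by title is often more revealing.
per_title = combined.groupby("isbn").apply(
    lambda g: g["units_sold"].corr(g["mention_count"])
)
print(per_title.sort_values(ascending=False).head())

None of this replaces editorial judgement; it simply puts the different figures side by side so that judgement has something concrete to work on.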

Within books, the company showing the way on big data is Amazon, which takes unimaginably vast piles of information about its customers and distils them into insights such as tailored recommendations. Publishers can never hope to reach that scale, but as more and more of them sell direct to consumers, they will at least be able to deepen and strengthen their own data. That will transform their knowledge of business and publishing trends and of gaps in the market, and help them understand their customers like never before. D2C is a great way to win extra sales—but in the long run, the ownership of data that it brings might prove even more valuable.
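
Amazon’s systems are proprietary and operate at a scale no publisher can match, but the principle of distilling purchase histories into ‘customers who bought this also bought’ suggestions can be sketched in a few lines of Python. The orders and titles below are invented, and a real recommender would draw on far more signals than simple co-purchase counts.

from collections import defaultdict
from itertools import combinations

# Invented purchase histories: each set is one customer's order.
orders = [
    {"Title A", "Title B"},
    {"Title A", "Title B", "Title C"},
    {"Title A", "Title C"},
    {"Title C", "Title D"},
]

# Count how often each pair of titles appears in the same basket.
co_purchases = defaultdict(int)
for basket in orders:
    for a, b in combinations(sorted(basket), 2):
        co_purchases[(a, b)] += 1
        co_purchases[(b, a)] += 1

def recommend(title, top_n=3):
    """Return the titles most often bought alongside the given one."""
    scores = {b: n for (a, b), n in co_purchases.items() if a == title}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("Title A"))  # e.g. ['Title B', 'Title C']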

A common misconception in all this is that it is just for big publishers. It is certainly true that conglomerates can pour in the most resources—into the complex systems they use to gather data and the experts they hire to make sense of it. But small and medium-sized companies can make smart use of it too—and in fact might be better placed to do so. For one thing, the data at their disposal is likely to be more manageable than the avalanche of it at bigger companies—and for another, it might well be better quality. Small publishers often operate in niches in which they already know a lot about their readers. Expanding on the data they have will bring them even closer.

This might not be what experts would technically term ‘big data’, and perhaps a more accurate way of describing it would be simply smart use of statistics. But the distinction hardly matters. What is abundantly clear now is that all publishers need to be in charge of their data. Even its ardent believers recognise that it can only ever be one part of publishers’ armoury—alongside their accumulated understanding of books, business and their customers. Data is useless without instinct—but equally, instinct is increasingly useless without data. As Mayer-Schonberger says: “Those who believe only one or the other works will fail.”
