ILI Keynote speaker Martin Hamilton, Futurist at Jisc, spoke to CILIP’s Information Professional on a range of topics:
How did you become a futurist with Jisc?
As a kid I was an avid reader. My parents were very poor, but our town had an excellent public library that I practically moved into. Science fiction has always been a particular favourite, so it was a delight to share the bill with Charlie Stross and Ken MacLeod earlier this year at the University of Edinburgh’s Near Future Library Symposium. There’s a fascinating dichotomy around sci-fi that informs much of my work at Jisc – technological and societal trends often creep up on us unnoticed, but then with hindsight it’s perfectly obvious that this or that change was inevitable.
I’ve been lucky enough to work in advanced technology fields for most of my career – from co-writing the first commercially available web OPAC for BLCMP (now TALIS) in the 1990s to running a supercomputer centre for Midlands’ universities before joining Jisc. I’ve actually been in Jisc’s orbit for a lot of that time, working on Jisc projects and services like the eLib programme and the Janet Web Cache Service. I’ve always thought that Jisc was a great way to share the costs of national infrastructure for research and education, and I feel privileged to lead Jisc’s Future and Emerging Technologies team.
Give us your Top 5 things on the horizon that library and information professionals need to know about, and why
I’m going to be controversial and say that library and information professionals are already very well equipped to deal with some of the most important issues of our time. Two stand out for me: helping people to find the information they need amid the vast amounts of data out there on the internet, and, in this age of disinformation, helping people to fact-check. The great strength of the internet is that it gives everyone a voice, democratising communication and the flow of information. Unfortunately, this is also its greatest weakness, unless you know how to tell a fact from an opinion. We are starting to see just how pernicious the latter can really be, with the return of deadly yet preventable diseases like measles – which can be attributed directly to the lies spread by anti-vaxxers.
But there are also some things that we aren’t really prepared for. For me, the principal issue is that our society needs citizens to have advanced digital skills if it’s to thrive, but we have a very fragmented picture of how our kids and adult learners will acquire these. It’s a very big leap from basic digital literacy tropes like setting up a blog and managing your social media presence to more advanced topics like pivot tables in Excel. And that’s before we go near the really high-powered stuff – for instance, at a university there will be lots of people who need to learn about stats processing using R. How does an institution go about making this part of its core information literacy provision? Do our libraries need to recruit data scientists?
I also think there is something fascinating going on in our schools with the focus in England on coding as a core skill. Do all our children need to learn to code? Will being forced to learn coding put them off pursuing it as a career? And what about all the other aspects of digital capability that don’t involve coding, but are more about using particular tools effectively – from databases to operating systems, Photoshop to AutoCAD? Most of all, what happens when a generation of hackers and makers that learned to code at school goes to college or university? Our tertiary institutions tend to think in terms of coding as a specialism, but perhaps it’s time to consider how we might build on the schools’ experience. Might we one day make coding or digital literacy more broadly a core pillar of apprenticeships and undergraduate degrees?
Big Data, Blockchain and Artificial Intelligence are all in the headlines – but how do you see them being utilised in libraries and information? Do you believe they will prove to be revolutionary in the profession?
We should be really careful to differentiate between the hype and the reality around new technologies like these. There are a couple of tendencies I’m seeing right now – one is to add blockchain to just about any project to make it seem all magical and sparkly, and the other is to treat just about anything done by a computer as “AI”. These are both quite pernicious trends. The truth is that there are some very interesting potential applications of blockchain, but in most cases people are essentially using blockchain as a database. We’ve had databases for a long time, and we’re quite good at them. Blockchain is like a very slow database that only has a few functions, and in many cases wastes vast amounts of energy carrying out meaningless proof-of-work calculations. We can do a lot better than this. If we get it right, then blockchain could be a really powerful way to track citations and enable code/data re-use.
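To make the “slow database with proof-of-work” point concrete, here is a minimal, purely illustrative sketch of a hash-linked ledger in Python. The block contents (a made-up citation record) and the difficulty setting are invented for illustration – this is not code from any Jisc project:

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    # Deterministic SHA-256 hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(block: dict, difficulty: int = 4) -> dict:
    # Proof-of-work: brute-force a nonce until the hash starts with
    # `difficulty` leading zeros. This is the "meaningless" work that
    # burns energy - the loop does nothing useful except take time.
    block = dict(block, nonce=0)
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# A tiny two-block chain recording a (hypothetical) citation event.
genesis = mine({"index": 0, "prev": "0" * 64, "data": "genesis"})
block1 = mine({"index": 1, "prev": hash_block(genesis),
               "data": "paper A cites dataset B"})

# Tampering with an earlier block breaks the hash link to its successor,
# which is the one property a plain database does not give you for free.
tampered = dict(genesis, data="forged entry")
assert hash_block(tampered) != block1["prev"]
```

Everything else here – inserting and reading records – is just what an ordinary database does, only slower, which is the point being made above.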
AI is already all around us, but it’s not the AI we might recognise from our favourite sci-fi stories – most of the major internet services use machine learning and neural networks for things that we take for granted, like recommending users to follow, or products to buy. There are also some interesting and advanced uses of AI in day-to-day use. My favourite is the way that Google Photos automatically classifies the pictures you upload to it, so you can easily find pictures of, say, cats or trees. And yes, you can even search for “cat up a tree”. In the library and information space, I’m cautiously optimistic that, now open access is becoming the norm, we will start to see things like serendipitous discovery of relevant research outputs based on text and data mining.
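The kind of “serendipitous discovery” mentioned above often rests on something quite simple: representing texts as term vectors and ranking them by similarity. Here is a minimal sketch, assuming invented paper titles and plain term-frequency cosine similarity (real systems use far richer text and data mining, but the principle is the same):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical open-access research outputs (titles invented for illustration).
papers = {
    "P1": "open access text mining of research outputs",
    "P2": "neural networks for image classification",
    "P3": "text and data mining of open research",
}
vectors = {pid: Counter(text.split()) for pid, text in papers.items()}

def recommend(query_id: str) -> list:
    # Rank all other papers by similarity to the query paper.
    q = vectors[query_id]
    return sorted((pid for pid in vectors if pid != query_id),
                  key=lambda pid: cosine(q, vectors[pid]), reverse=True)
```

Calling `recommend("P1")` ranks the other text-mining paper above the unrelated image-classification one – a toy version of surfacing relevant research outputs a reader did not explicitly search for.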
Read more from Martin in the next edition of ILI365.
Martin Hamilton will keynote at ILI next month, on Wednesday 17 October at 09:00
This Q&A was first published in CILIP's Information Professional, September 2018