This week I begin my self-study journey at City LIS, diving into the field of Children and Adolescent Libraries and Collections (CaALCs). I have developed some guiding questions for my inquiry under the unifying central idea:
Children’s libraries are unique communities designed to meet specific needs.
Form: What makes Children and Adolescent library collections and services unique?
Function: What is the role of CaALCs librarians in public and school libraries?
Change/Perspective: What are the challenges and issues facing CaALCs, both historically and in the near future? Areas of interest include representation, community building, de-colonising the collection, and inquiry learning in the library.
Causation/Responsibility: What is the impact of children’s libraries?
As I progress into the discovery phase I am aware that my inquiry might take an unexpected turn, throw up some left-field ideas and (hopefully) challenge my misconceptions. I look forward to it. I would also like to hear from those of you in the LIS and educator communities: whether you work in this specific sector or not, your ideas and suggestions are appreciated. I am interested in open dialogue and am here in this space to engage with positive intent and a listening heart.
When the latest grouping of lecture topics was initially introduced, I was highly sceptical about how coding, text visualisation and APIs were part of librarianship. As a technology integrationist, I was at least aware of most of the main ideas and skill sets, though APIs gave me pause. What changed my mind, and helped me to understand how these practices fit into the library science landscape, were the practical exercises: cleaning up data, creating and analysing data sets from Twitter, and doing my own analysis of texts using visualisation tools.
In my previous post, I spoke about the challenges of starting from scratch with my catalogue, as the collection had not been properly maintained for around seven years and much of the metadata was missing or unsearchable. After reading Helen Williams’ account of how LSE tidied up their catalogue (2010), I kicked myself for not thinking of it sooner! Of course, I could have downloaded the entire catalogue as a CSV (it’s not that big) and used simple regular expressions to identify and correct the dodgy data. There are some challenges with that, as I share the catalogue with the Secondary library, but with filtering I could look at just the records relevant to the Primary division. I am planning on doing just that once I have completed scanning the collection, to remove some of the locations from our old building that still plague the database and to create more searchable metadata.
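To make that concrete, here is a minimal sketch of the kind of CSV-plus-regex clean-up I have in mind. The column names and the old-building location prefix ("OLD-") are invented for illustration; a real export from my cataloguing system would look different.

```python
import csv
import io
import re

# Stand-in for a small catalogue export (column names are hypothetical)
sample = io.StringIO(
    "Title,Location\n"
    "The Gruffalo,OLD-Shelf 3\n"
    "  Matilda ,Fiction A-Z\n"
)

cleaned = []
for row in csv.DictReader(sample):
    row["Title"] = row["Title"].strip()              # trim stray whitespace
    # Strip the defunct location prefix left over from the old building
    row["Location"] = re.sub(r"^OLD-", "", row["Location"])
    cleaned.append(row)

print(cleaned)
```

The same pattern scales to a full export: read, apply a handful of targeted substitutions, and write the corrected rows back out for re-import.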
While I’d heard of APIs before, I’d not really understood what they were or how they could benefit learners. I can see now how libraries can future-proof themselves by creating data sets through API tools, gathering social media interactions around events or people, or using web scraping to collect large-scale data for later analysis. The idea mentioned by Albertson (2019), that engaging with these kinds of data sets “is a way of measuring the reach and impact of scholarly communication” beyond citation searching, because you can examine the type of impact through hashtag and retweet analysis, was very interesting to me. I wonder what influence this will have in real-world terms: will reach equate to additional research funding or salary rises, or will it become a performance indicator on appraisals (perish the thought!)?
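As a sketch of what this hashtag-and-retweet analysis might look like, the snippet below tallies hashtags and total retweets over a small hand-made sample that stands in for an API response; the field names and tweets are invented examples, not the actual Twitter API schema.

```python
from collections import Counter

# Hand-made sample standing in for tweets collected via an API
tweets = [
    {"text": "Loved the author visit! #readingforpleasure #library", "retweets": 12},
    {"text": "New display up today #library", "retweets": 3},
    {"text": "Storytime at 10am #readingforpleasure", "retweets": 7},
]

# Count how often each hashtag appears across the data set
hashtags = Counter()
for tweet in tweets:
    for word in tweet["text"].split():
        if word.startswith("#"):
            hashtags[word] += 1

# A crude reach measure: total retweets across the collected tweets
total_retweets = sum(t["retweets"] for t in tweets)
print(hashtags.most_common(), total_retweets)
```

On a real data set gathered around an event hashtag, the same counts begin to answer the "type of impact" question in a way citation searching cannot.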
As for text mining and visualisation, I have been using Wordles and the like with students for about ten years as a means of helping them identify key themes and characters in a text, or to find excessive repetition in their own writing, but I hadn’t seen this fun, and pretty, tool applied to large-scale data before. I found using Voyant to compare texts very interesting and thought it would be very useful for my students to assess originality or to understand the emergence of ideas over time.
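Underneath a tool like Voyant is essentially term counting and comparison, which can be sketched in a few lines. The two passages below are invented examples; the point is the mechanic of tokenising each text and looking at frequent and shared terms.

```python
import re
from collections import Counter

# Two toy passages to compare (invented examples)
text_a = "the library is a community and the community shapes the library"
text_b = "the archive serves its community and records community memory"

def term_counts(text):
    # Lowercase, split on non-letters, drop very short function words
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if len(w) > 3)

counts_a = term_counts(text_a)
counts_b = term_counts(text_b)
shared = set(counts_a) & set(counts_b)   # terms both passages emphasise
print(counts_a.most_common(2), counts_b.most_common(2), shared)
```

Even this toy version shows how comparing frequency profiles across texts can surface shared ideas, which is the intuition behind using such tools to trace the emergence of ideas over time.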
There is so much potential for these types of tools to demonstrate understanding, to compare and contrast ideas and timeframes, and to stimulate curiosity and engagement. This is most evident in the Digital Humanities field. Reading about the Venice Time Machine Project (Abbott 2017) fired my imagination and helped me to see the value that AI, machine learning and the automation of labour-intensive tasks have for the library and information sciences. I am really excited to see how these data-rich projects will be used for education, potentially through gamification, to help immerse learners in worlds from long ago to better understand them and themselves.
Abbott, A. (2017) “The ‘time machine’ reconstructing ancient Venice’s social networks,” Nature, 546(7658), pp. 341–344.
In the DITA module thus far, the value of good-quality metadata in creating searchable, descriptive and efficient catalogues has been strongly emphasised, particularly the role metadata has in cataloguing non-traditional documents (e.g. not books or written texts). While this may appear obvious when you think about it, the challenge of developing systematic metadata is one I have been grappling with in my day-to-day work.
Two years ago the entire catalogue for my school’s collection was lost – this was before I was the librarian, and the catalogue had been stored on an iPad. There were reasons for this: no dedicated librarian or resource manager, and budgetary constraints that meant a free app was the easiest way to track books for staff who had many other hats to wear. When the account for the app was closed, all the data was lost. I was in the process of negotiating a new, affordable cataloguing system when COVID hit; like most of the country I started working from home, and the focus shifted from our library collection to online resources. When we migrated the secondary division’s catalogue across, remnants of the old primary school catalogue were there, but the metadata on locations, format types and so on was inaccurate and needed to be purged completely. By the time I got the library up and running, the City LIS course was in full swing and the session on metadata couldn’t have been more perfectly timed.
Having taught web design at a secondary school level using Dreamweaver in about 2006, I wasn’t completely clueless about what metadata was; I just didn’t really understand it in the bibliographic context. The plethora of acronyms for bibliographic standards, and how the records are developed, was very overwhelming at the beginning, but as I read more and asked deeper questions, the history and context began to emerge. What I found really interesting was how the different systems or methods for standardisation developed, from AACR through to BIBFRAME (Billey 2015), and that these are constantly evolving practices and theoretical frameworks that inform each other. The relationship between how we use metadata and how we record it seems to change over time (Lybarger 2018) depending on what we need to do with it now, but also in the future.
I am averaging 300 books per week at the moment, cataloguing furiously in the time slots I have available between library lessons, information literacy sessions, planning meetings, coaching sessions and the ubiquitous supervision duties. Not bad for three days a week. WorldCat has been invaluable in developing good-quality bibliographic records, as my cataloguing system brings down some metadata, such as titles, author, sometimes the book type and a thumbnail of the text, but not all. My focus has been on identifying the keywords and summaries that make a catalogue searchable for users. Unfortunately, the OCLC metadata subscription service is financially out of reach for my tiny library. I am grateful that the records and metadata are available in a standardised format via WorldCat – I just have to enter the data manually, which is time consuming but ultimately worth it. Why? Because metadata underpins our information seeking, whether online or in a library catalogue, and ‘good enough’ metadata isn’t really good enough.
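A tiny illustration of why those keywords and summaries matter: the same simple search finds a record only if someone has enriched it. The records, field names and search function below are invented examples, not my actual cataloguing system.

```python
# Two invented records: one enriched with subject keywords, one 'good enough'
records = [
    {"title": "The Iron Man", "keywords": ["robots", "friendship", "Ted Hughes"]},
    {"title": "Untitled donation", "keywords": []},
]

def search(catalogue, query):
    # Match the query against titles and subject keywords, case-insensitively
    query = query.lower()
    return [
        r["title"]
        for r in catalogue
        if query in r["title"].lower()
        or any(query in k.lower() for k in r["keywords"])
    ]

print(search(records, "robots"))   # found only because keywords were added
print(search(records, "iron"))     # found via the title itself
```

A student searching for "robots" would never surface the second kind of record, however relevant the book might be, which is exactly the gap the manual keyword work closes.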
When completing the initial reading for DITA, it occurred to me that it is very easy either to grossly underestimate the way in which information underpins our modern lives or to be extremely paranoid about it, depending on how you view the world. From CCTV to search histories, eye-tracking software to our shopping habits: information is being recorded about us every moment of the day.
The bigger questions about who is recording the data, how they are using it, and whether I am waiving my right to privacy by simply engaging in the digital world often feel overwhelming and too difficult to comprehend. I wonder too if my behaviour is influencing choices made by the data collectors, or if I am the one being influenced? As a teacher, I am constantly learning, evolving, and working hard to ensure that my students are equipped with the necessary skill sets and critical thinking processes to navigate the complex and fast-changing world we live in. I think of myself generally as pretty savvy when it comes to information literacy, but I have felt out of my depth with the terminology at the academic level, and it took a lot of thinking and reading to feel like I am beginning to grasp the nuances (which I’m still not sure I have).
In David Beer’s 2018 article, ‘Data and political change’, the idea of data and technology driving the democratic process, where people are simply playing out ideologies they believe to be their own, is posited in reference to the writings of early 20th-century sociologist Georg Simmel. What struck me was how prescient Simmel was in identifying how today’s algorithms, personalised and on demand, would create an echo chamber that configures what Simmel called ‘fragments’ of data into a whole cloth: a complete world view that doesn’t easily allow for dissent or alternatives. I saw this played out on social media when my connections and I were caught completely off guard by the result of the 2016 Brexit referendum and the subsequent victory of Donald Trump in the US presidential election, because our world view is formed by those we choose to ‘follow’ and by the personalised nature of advertising and recommendations on social media sites. All the exit polls, news articles and advertisements displayed for me online were very much anti-Brexit and pro-Hillary Clinton.
Interestingly, a new study into why people might not be truthful with telephone pollsters (‘Are Election 2020 Poll Respondents Honest About Their Vote?’, Litman et al. 2020) may also help explain the extent of this ‘echo chamber’ phenomenon. In this study, people were asked whether or not they had been truthful when questioned about their preferred candidate. Those who were untruthful cited fears over a lack of anonymity leading to “reprisal and related detrimental impact to their financial, social, and family lives should their political opinions become publicly known.” If this has some transfer to the sphere of social media, where the degree of scrutiny over our public data can have a serious impact on our personal and work lives, then I have serious concerns over the polarisation of data and its power to persuade and influence people. Irrespective of what political or social viewpoints they hold, shouldn’t people have the right to express their point of view without fear, as long as it is done respectfully? Could data actually be the end of freedom of speech?
In our second lecture for DITA, we looked at how technologies that have arisen from the development of computers have had many benefits. From Ada Lovelace’s initial punch-card programming (yes, it is programming!) to the tiny micro-computers we put in our pockets called mobile phones, the opportunities for creativity and information sharing, the implications for a wide range of industries, and the capacity to solve complex problems quickly and with multiple considerations are immense and should not be underestimated. Linking this development to the ideas presented in the first session, I found very interesting Jer Thorp’s view that thinking about systems rather than data “helps us to solve problems more efficiently… to more deeply understand (and critique) the data machinery that ubiquitously affects our own day to day lives” (‘You say data I say system’, 2017). Systems, to me, are more dynamic and seem to evolve depending on what people need them for, which implies human input is a greater part of the process than the dystopian predictions of Georg Simmel may have suggested.
It is clear to me that the role of the information professional in building information literacy and critical thinking capacity in individuals and organisations is essential for a data-driven society to function well. Though I’m not sure how this can be done when it sometimes feels as though the system itself is fighting against critical thinking (fake news, advertising, political agendas), and where people (including myself) are exhausted from information overload and only exposed to a particular world view. I feel that this is a problem I will be thinking about for a long time, asking myself what pieces of the puzzle I am missing and what I have got wrong, because being aware of the problem doesn’t put me outside of it.
I begin this blog as a CITY LIS student studying for my Masters in Library and Information Science. I am a qualified teacher with experience in both primary and secondary education, largely working in the international school sector. I transitioned to a library & technology role this year, which I am enjoying thoroughly, but making the balance between studies, work and home healthy and sustainable will definitely be a challenge. Stay tuned for reflections on my course work and general library business!