The Art and Science of Qualitative Data Collection
What is impact? According to the experts at the Impact Management Project, impact can be broken down into five dimensions: what, who, how much, contribution, and risk. As an M&E Fellow working with MIT D-Lab Scale-Ups Fellow Kwangu Kwako, Ltd. (KKL), I spent this summer diving deeper into the question of impact along these five dimensions. In particular, the organization wanted to know how moving to upgraded housing affects tenants in other areas of their lives. Through a series of face-to-face interviews with KKL’s customers and beneficiaries, I have begun to shed light on this difficult question.
The United Nations Sustainable Development Goals, the Universal Declaration of Human Rights, and the International Covenant on Economic, Social and Cultural Rights all recognize the importance of access to adequate housing. Research indicates that substandard housing affects residents’ health and education outcomes as well as their quality of life. KKL offers a cost-effective pathway to achieve better housing for all. But to what extent was KKL’s intervention delivering on these outcomes?
Because collecting experimental data on these outcomes can be challenging and expensive, our survey instead aimed to gather information that would tell us whether we were heading in the right direction. We leaned heavily on self-reported data on poverty probability, global self-worth, customer satisfaction, social inclusion, and health, and sought to identify before-and-after trends.
One lesson I have learned is that data collection is an art as well as a science. Even the perfect survey (and if it exists, it is not the one that I made) can be derailed by unpredictable, interpersonal dynamics. Who would have thought that recording video footage could possibly bias people’s responses? (Turns out, everyone but me.)
Another takeaway for me was that format matters. I had designed the survey in Google Forms, knowing that it would be administered orally, but I did not account for the nuances that would be lost in translation along the way. The visual component of written questions, such as a 1-10 scale or a five-point Likert scale, does not come across orally nearly as well as it does on paper or on a screen. Since there is not much practical value in distinguishing “Disagree” from “Strongly Disagree,” for instance, we simplified these response choices by removing the extremes on both ends.
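To make that simplification concrete, here is a minimal sketch of how the collapsed scale might be recoded during analysis. The labels, column of sample responses, and function name are illustrative assumptions, not KKL’s actual codebook.

```python
# Hypothetical sketch: collapsing a five-point Likert scale into three
# categories, mirroring how we simplified the orally administered survey.
# Response labels are illustrative only.

LIKERT_COLLAPSE = {
    "Strongly disagree": "Disagree",
    "Disagree": "Disagree",
    "Neither agree nor disagree": "Neutral",
    "Agree": "Agree",
    "Strongly agree": "Agree",
}

def collapse_likert(response: str) -> str:
    """Map a five-point Likert response onto a simplified three-point scale."""
    return LIKERT_COLLAPSE.get(response.strip().capitalize(), "Unknown")

if __name__ == "__main__":
    raw_responses = ["Strongly agree", "Disagree", "Neither agree nor disagree"]
    print([collapse_likert(r) for r in raw_responses])
    # ['Agree', 'Disagree', 'Neutral']
```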
Once we replaced me with more competent enumerators, Winnie and Milka, our data collection sped up and smoothed out considerably. Working with native Swahili speakers removed the power dynamics of gender and nationality and made the interview process far more direct.
At the end of a summer full of “no duh” moments such as these, I have learned a lot and look forward to seeing what the surveys reveal.