ZSI asked me to kick off the VSL blogosphere with a short reflection on my desktop diagnosis experience to date. I’ve structured the post in three parts: first, I share some words on my intentions as a social lab manager with the fortune and privilege of being full-time on the project. Second, I share some of the work I’ve been doing so far, what I think I’ve learned, and what I’m understanding about how to do the work. Finally, I close with thoughts on activities that might benefit from collective, cross-lab initiative.
Ilse asked that I limit my post to 500 words…but…as you can see…I did not do a good job of that. Sorry Ilse! 🙂 What I have done, however, is break the post into several sub-sections, each between 250 and 450 words, that should stand on their own, so you don’t have to read everything at once (or at all, although I hope you do, eventually). For what I hope will be everyone’s benefit, I’ve created a set of general diagnosis documents in Nextcloud (Literature > General Information for RRI Diagnoses), which includes material that I linked to earlier (just in a form that has been liberated from the web) and more; I reference some of these documents in the rest of this post.
I welcome your comments and the conversation that follows. Thank you for your time and consideration. I estimate that reading from here on will entail 12-16 minutes.
A note on my intentions
I view each of our individual social lab management efforts as part of a larger, collaborative endeavor. I see it as a team obstacle course, one in which we only ‘win’ when we all get to the ‘finish line’ in one piece, as a collective of social labs. In this course, there are key obstacles that we have to figure out how to overcome (e.g., how do we come up with common criteria for selecting projects to analyze in our diagnoses), and key milestones that we all have to pass at the same time (e.g., finishing our individual social lab diagnoses so that our WP coordinators can synthesize H2020 “arm level” deliverables, like the Excellent Science Diagnosis), so that Erich and IHS can, in turn, keep us in the good graces of our EC contacts (to whom we’ll undoubtedly need to turn for knowledge and experience to enrich our social labs). We’re all coming to the team race with different assets, skills, and experiences. For example, some people are only on the project for a small percentage of their time, but bring a great depth of experience in research and/or practice. Others, like myself, have only our experiences as early-career scholars, but are on the project 100%. My hope is that we can help each other work together so that we benefit from our strengths and compensate for our weaknesses.
My hope is that, in writing this and future VSL posts, the lessons we are learning along the way will help us all, as a team, overcome the obstacles ahead. This goes for anything I share, whether it’s a new experience, a document discovery, or a reinforcement of something someone has already said. It also goes for whatever function a post serves: a “scouting for obstacles” function, a “moral support” function, or a “let’s pause and come up with a plan for this obstacle” function. I aspire to see all of my actions and reporting in this dual way: feeding my experience into the experience of our collective. I am open to feedback on ways I can improve in trying to meet this aspiration through intentional action and learning.
What I’ve been up to
Managing the FET and FOOD labs, I find myself in the funny position of feeling like I need to learn how to do things as quickly as possible, because I know I’ll have to do everything twice. To that end, I’ve jumped into the deep end of the FET diagnosis, so I thought I’d share a bit about what I’ve done and found so far.
- My approach has been to begin broadly and narrow down. This has meant starting by searching for references to FET (literally a few “Ctrl+F” searches for the programme name) in EU Regulation No 1291/2013; COM(2011) 808, 809, 810, 811; and the Interim Evaluation of Horizon 2020. (See Nextcloud folder)
- I’m new to Europe, so I’ve also been reading into the general aspirations articulated for Horizon 2020 in these strategy-level documents. My aim is to use the notes I generate on these documents to flesh out some context in my FET diagnosis, and eventually to draw on them when I have to compile all WP2 diagnoses for Deliverable 2.1. (See callout note: What I learned about RRI in H2020 from the interim evaluation)
- Searching the interim evaluation, I’ve found that the most helpful place to get information about program activity to date is the Commission staff working document Interim Evaluation of Horizon 2020: Annex 2 (SWD(2017) 221 final).
My process has been to use the Diagnosis Template document to structure my note taking. The first thing I do when I open a new document is note the reference information and give the reference a number; I use this number whenever I take a note (and I will also note page numbers when I take full excerpts). In writing up the scoping section for the FET diagnosis, I found that the report can really write itself if I’ve been disciplined about putting the correct notes in the correct sections. This reading and note-taking process is something I think will help me when I have to do everything over again for FOOD 🙂
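For anyone who wants to mechanize this bookkeeping, here is a minimal sketch of the scheme described above (in Python, with entirely hypothetical names; nothing here is part of the Diagnosis Template itself): each source gets a reference number, and every note carries that number, an optional page, and the diagnosis section it belongs to.

```python
# Hypothetical sketch of reference-numbered note taking: sources are
# registered under a number, and notes are filed by diagnosis section
# with a [ref] or [ref, p. N] tag appended.

from collections import defaultdict

references = {}            # reference number -> citation info
notes = defaultdict(list)  # diagnosis section -> list of tagged notes


def add_reference(number, citation):
    """Register a source document under a reference number."""
    references[number] = citation


def take_note(section, ref, text, page=None):
    """File a note under a section, tagged with its reference (and page)."""
    tag = f"[{ref}]" if page is None else f"[{ref}, p. {page}]"
    notes[section].append(f"{text} {tag}")


add_reference(1, "Interim Evaluation of Horizon 2020 (SWD(2017) 221 final)")
take_note("Scoping", 1, "Example excerpt text", page=66)
print(notes["Scoping"][0])  # → Example excerpt text [1, p. 66]
```

With notes grouped by section like this, drafting a report section is mostly a matter of pasting in the already-tagged notes, which is the “report writes itself” effect described above.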
Callout note: What I learned about RRI in H2020 from the interim evaluation
The Interim Evaluation of Horizon 2020 casts considerable doubt on whether the 11% of Horizon 2020 projects claiming RRI relevance really are ‘instances where citizens, CSOs and other societal actors contribute to the co-creation of scientific agendas and scientific contents’ (pp. 64-65). At the project level, there are but a handful of cases of genuine co-production (p. 66). At the program level, there is little detail on how consultations (for example CIMULACT, the largest pan-European consultation so far) will be enacted in future research and innovation agendas via the strategic programming process (p. 59). Targeted searches for the RRI keys reveal:
- core civil society representation in programming is at once overburdened and under-engaged;
- gender remains poorly “understood and is often confused with gender balance in research teams”;
- social science and humanities integration remains scant, skewed towards economics, political science, and sociology over the full range of disciplines, and a mere 2.1% of the total H2020 spend;
- open access initiatives are progressing (60 to 68% of publications qualify), but issues persist with the remainder, which opt out citing intellectual property, personal data, national security, or other reasons;
- science education is not remarked upon at all, being delegated to ERASMUS+ reporting;
- the word ‘ethics’ or ‘ethical’ appears only 6 times, and in no case related to content or deliberation.
There was also an overall sense of the paucity of means to track indicators of RRI and understand impact (note: MORRI is working on this, and I have yet to read their deliverables).
At the end of the Interim Evaluation is a list of short- and long-term limitations identified for Horizon 2020. Some of these limitations struck me as opportunities that RRI or RI activities could help address. There might be a way to leverage these limitations as inspiration for our future pilot actions:
Short term limitations (listed as action items)
- Build understanding of and capacity to engage cross-cutting issues of the program;
- Build stronger feedback ties to policy;
- Accelerate sustainable development and climate targets;
- Improve depth and breadth of social science and humanities embedding, as well as advisory body and evaluation body gender balances;
- Engage users in agenda-setting;
Long term challenges (listed as action items)
- Clarify Framework- and Program- level intervention logics (links between impact, results, outputs, and outcomes);
- Enhance salience of indicators for public monitoring;
- Be more inclusive of and transparent with stakeholder involvement;
- Increase involvement of ‘end-users’ in co-design of agendas;
- Align program and policy priorities with challenge-based approaches and with less work program fragmentation;
- Establish “impact-focused mission-oriented approach to deliver on implementation of SDGs.”
Things on the HoRRIzon that we may benefit from thinking about together
I’m really sorry not to be able to join all of you in Brussels next week for the pre-conference social labs meeting. I’m presently on a short holiday back in the US for my brother’s wedding. Fern will be in Brussels, however, and I look forward to hearing how all the conversations go, and to picking up the threads in future VSL posts, as well as in Vienna in November and December. Below are some issues that I think may benefit from cross-social-lab conversations. Some of these are not new, and are likely covered by Erich’s report on his conversations with us…but still:
- Within each program we analyze, what criteria will we use to select a subset of projects for our analyses? There are more than 100 FET projects and several hundred FOOD projects; a detailed analysis reading all of the periodic reporting documents on CORDIS alone might be a prohibitively time-consuming undertaking (to say nothing of coming to an agreement on what data we would want to analyze at the project level). What criteria can we all use to a) enable a representative and rigorous sample; b) make our jobs easier; and c) enable cross-program comparisons in WP-level deliverables, and in any publications people wish to collaborate on?
- When it comes time to move beyond desk research, do we want to pursue a common interview protocol? I think this would be desirable from the perspective of supporting cross-program comparisons, as well as enhancing the rigor of our activities.
- If yes, what kind of protocol and with what questions (e.g., closed and structured; semi-structured; open conversation)?
- If yes, do we want to think about tailoring sections of the interviews to different audiences (e.g., we may want to ask different questions of a researcher funded by our project than of an EC DG program officer, or a participating CSO, etc.)?
- If no, perhaps we could at least agree on a handful of questions that all social labs ask.
- It’s becoming clear that many H2020 programs are designed with some element of cross-program activities. Do we want cross H2020 program elements to be reflected in the social labs? What does this mean for recruitment? What does this mean for having social labs interact, once they are formed?
- We may want to agree on a shared process for recruitment to ensure we don’t have separate social labs independently contact the same people to participate in different social labs.
- If we end up coming up with a lot of different activity that would benefit from cross-lab collaboration, we might want to experiment with informal working groups (e.g., some folks working together on interview protocol design; others on recruitment processes; then sharing out to the group for common edits and implementation).
- Finally, looking way out (too far out, possibly), as a social science researcher I would love for at least some of the social labs to think about doing the same kind of pilot actions, either in the same way or with some minor, intentional alterations (beyond varying host organization and participants). If you’re also interested in this notion of “cross-lab experimental designs”, let me know. It would be great to have a place where we can start chatting about this on the virtual social lab, over email and Skype, and in person, keeping a record of ideas generated in the process.
Ok, that’s it! You made it to the end of the post! Hopefully you found at least one thing of use, and enjoyed the read.
Looking forward to working with all of you in the weeks, months, and years ahead.