[toc]
Discovery
The work of bringing together academics and policymakers is the primary focus of most research to date on research-policy engagement (Oliver et al., 2022). This includes the dissemination of results, formal requests for evidence made by government, and the facilitation of relationships between the two to build partnerships. This focus is reflected in the much larger number of mechanisms proposed for this stage in this paper, but it does not necessarily indicate a higher degree of importance. Ensuring effective design, delivery and review will also be vital for building better, more effective collaborations.
The work happening at this stage has many names, which can make it difficult to clarify the aims of collaboration. The work could be called research translation, knowledge transfer, diffusion, research impact or evidence uptake. These all refer to practices used by academics to promote completed research or disseminate results, rather than practices intended to inform policy (Cherney et al., 2015; Davies et al., 2008; Oliver et al., 2022). These uni-directional practices can be distinguished from broader concepts of engagement and collaboration, which aim to develop policy-relevant research outcomes (Western, 2019). The public sector has similarly been accused of seeking research after the fact, with the primary goal of supporting decisions already made (Ferguson et al., 2014). Contract research can be limiting for those working within the University sector, particularly as contracts rarely allow for publication or accommodate theory generation - the backbone of promotion and success in academia (Oliver et al., 2014). The public sector is also claimed to work to different timelines and to seek more definitive answers than is workable for its academic collaborators (Ferguson et al., 2014; Head, 2015).
A vast array of institutional and cultural practices has been recommended to address the challenges listed above. From the research (push) side, there are arguments for designing and developing usable knowledge in partnership with policy makers (Western, 2019), using PhD internships or secondments to educate academics in how policy is made (Foxen & Bermingham, 2022), and building more extensive engagement platforms (Menon & Rutter, 2022). On the public sector (pull) side, the focus appears to be on encouraging an interest in engaging with research in the first place (Ferguson et al., 2014; Uneke et al., 2015), or on addressing the barriers that come with limited time or capability to engage with academic research findings (Cairney et al., 2023; Oliver et al., 2014).
Another view is that these differences lead to unspoken cultural barriers between the two. Bogenschneider et al. (2019) suggest that academics might be better served if they see their research as forming part of a broader argument and acknowledge the role that compromise and negotiation play in policy making. This is similar to the framing of evidence as something policy will be informed by, rather than something it is independently based upon (Head, 2016). Importantly, however, this is likely only to be meaningfully achieved if there is mutual trust between the two parties (Buick et al., 2016; Gardner et al., 2021). Bringing academics and policy makers together will not, alone, lead to better collaboration. The collaboration must be built on authenticity and trustworthiness (Mols et al., 2018).
To address the differences between the two communities, a third party is sometimes presented as a way of brokering relationships and translation. Commonly known as knowledge brokers, these actors are presented as a possible solution for navigating the barriers between the two worlds. They can take many institutional forms: external and independent bodies like the UK’s What Works Centres (https://www.gov.uk/guidance/what-works-network), or university-supported bodies such as a Policy Lab or Research Institute. Government-funded bodies such as the Australian Institute of Health and Welfare or the Indigenous Mental Health and Suicide Prevention Clearinghouse also do important brokerage work. Some internal government teams, such as the Behavioural Insights Teams (in their varied forms), can be effective in bringing in expertise and facilitating projects and partnerships. Knowledge brokers do many different types of work: brokering relationships, disseminating and translating ideas, and facilitating the development of research networks (Auld et al., 2023; Bandola-Gill & Lyall, 2017; Bornbaum et al., 2015; Knight & Lightowler, 2010). However, brokerage is a broad and complex idea in and of itself, and not a panacea.
These ideas point to opportunities to develop more effective collaborative partnerships between the APS and academia, but they also highlight some of the ongoing challenges. Many of these debates are long-standing, with concerns dating back to the very development of the policy sciences, but these issues are surmountable with time.
Design
The majority of initiatives to develop academic and public sector collaboration have focused on discovery. How can academics better communicate their research? How can academics find out what problems matter to the public sector? How can the two work together more effectively? Unfortunately, this means very little focus is given to how collaborations could or should be designed, delivered and reviewed. The following sections explore these ideas in more detail.
Several conditions can help to facilitate better collaboration during design. The first is addressing any asymmetry in power relationships. If there are imbalances in “capacity, organization, status, or resources to participate, or to participate on an equal footing with other stakeholders, the collaborative… process will be prone to manipulation by stronger actors” (Ansell & Gash, 2008, p. 551). This means groups that are more cohesive and organised will have more power when shaping a collaborative project. Academics, especially individuals who are negotiating independently, will likely be at a disadvantage. Individuals from ATSI, CALD and LGBTQI+ communities, and those undertaking research which represents these communities (and other disadvantaged demographic groups), could find this particularly challenging. Some universities and research groups have more structured ways of engaging, and these groups may have an advantage in negotiating partnerships; however, by and large, the public sector will have more power in these collaborative arrangements. Addressing this imbalance is an important component of the design of a collaboration.
Addressing the power imbalance will also need to include a consideration of how these partners will benefit from the collaboration. Do they have the right skills, time and capacity to participate? Whether all parties benefit is an ethical question (Sullivan, 2022, p. 109). Making policy better is a laudable goal, but academic collaborators, particularly those from less privileged groups, have a balance of commitments that should not be assumed to be less important. Equally, research which does not address a clear public problem (but is perhaps of academic interest) should not be prioritised in this context. For example, Newman (2011) wrote about her experience working at the boundary of research and policy, noting that the challenges included:
struggles to overcome the limits of specification in order to do what we considered to be useful research, and concerns about how we might use the opportunities offered by commissioned research to advance thinking and theory building in the field – [which was] an almost impossible process given the need to focus on writing reports for funders and bidding for the next project. (Newman, 2011, p. 475)
This contrasted with her experiences working within government, where:

the short, sharp, but inevitably simplifying messages tended to be welcomed by policy actors, although their production tended to be strongly resisted by researchers not only on the basis of the incompleteness of the evidence but also since they wanted the complexity and subtlety of the argument, and the qualifications and caveats surrounding the evidence, to be fully recognised. (Newman, 2011, p. 475)
Understanding the existence of these disparities from the outset will allow for a design which provides opportunities for both parties.
This highlights that interdependence is another important condition. If groups feel that they can more effectively achieve their goals independently, they are less likely to collaborate successfully (Bryson et al., 2015). It is important that both parties see the other as a contributor to success, rather than as a gatekeeper to the necessary data or a stamp of legitimacy (Boswell, 2009). A common frustration for academic partners is the feeling that their work ultimately fails to shape the policy decisions made (Boswell, 2009). Incentives increase as stakeholders see a direct relationship between their participation and concrete, tangible, effectual policy outcomes (Brown, 2002), but they decline if stakeholders perceive their own input to be merely advisory or largely ceremonial (Futrell, 2003).
Some countermeasures which have been argued to support collaboration include:
- Transparency of contracts (MOUs), processes and information
- Willingness to be challenged or, at least, to remain open to diverse interpretations (non-antagonistic debate)
- Follow-through and consistent information sharing (Ansell & Gash, 2008; Bryson et al., 2015)
In addition, the role played by neutral leadership or knowledge brokers is considered a key component for facilitating collaborative engagement more broadly (Chrislip & Larson, 1994; Ozawa, 1993; Pine et al., 1998; Reilly, 2001; Susskind & Cruikshank, 1987). They can provide assistance in setting and maintaining clear ground rules, building trust, facilitating dialogue, and exploring mutual gains (Ansell & Gash, 2008, p. 554). Leadership should also include direction over the ethical, emotional and experiential elements of a collaboration, although these foundational elements are often not discussed in detail (Sullivan et al., 2012). Navigating the values, meanings and beliefs that exist in a collaborative project will require taking the time to explore conflict and openly discuss how these differences play a role in shaping what ‘matters’ to each party.
In addition to leadership, the question of ownership is also important. Ownership allows for clear lines of accountability. Defining who will do what and when makes expectations clear and can be the basis for the development of greater trust and cooperation (van der Arend, 2014). Ownership could be supported by a Charter of Partnerships, similar to what was recommended in the Independent Review of the APS. The Charter of Partnerships could encourage the establishment of clear expectations for government, the APS and the community on how the APS will work with its external partners. As the Review describes:

Premised on the understanding that current engagement is insufficient, the Charter will be a public commitment to work openly and respectfully, to be willing to learn and listen, to inform and be informed. It will set expectations of being a good partner with the APS, as this relationship cannot just be a one-way street. (p. 119)
The literature also suggests that clear ground rules and process transparency are important design features (Ansell & Gash, 2008, p. 556). Process transparency means having a clear line of sight between input and output, so that stakeholders can feel confident that the public negotiation is “real” and that the collaborative process is not a cover for bargaining done behind closed doors (Dunlop & Radaelli, 2016). However, this also requires academic collaborators to be well informed about the nature of the policy process, and the circuitous and slow impact that new information and research can have (Weiss, 1986).
Process transparency, leadership and accountability all highlight the important role played by trust in collaboration, particularly when past relationships have been antagonistic or fractious. If trust does not already exist, it will need to be built (van der Arend, 2014). It is critical that power imbalances, language barriers and different institutional expectations are addressed directly in discussions.
Finally, openly acknowledging and finding ways to work with the institutional and structural expectations of both parties is important. This includes the frameworks, norms and rules that exist but cannot be assumed to be well understood. For the public sector, changes in government direction and interest, resourcing, the framing of language, timeframes for clearance, and critical dates such as Senate Estimates and the Budget are all critical but implicit institutional knowledge. For the research sector, publishing pressure, semester dates, grant applications and the expectation to ‘show your work’ in the form of extensive literature reviews (like this one!) are also key institutional expectations, but are not well understood outside the academic sphere. The politics of both institutions, and the norms and rules that exist, are equally important to the involved parties, and ignoring them is an exercise in futility. Communicating these politics explicitly, clearly and transparently can only improve collaboration. A “prehistory of conflict is likely to express itself in low levels of trust, which in turn will produce low levels of commitment, strategies of manipulation, and dishonest communications” (Ansell & Gash, 2008, p. 553).
These establishing practices all involve a meaningful commitment of time and resources. Ansell and Gash (2008) note that if partners “cannot justify the necessary time and cost, then they should not embark on a collaborative strategy” (p. 559). Arguably, it is better to consult or use internal research capacity than to undertake a collaboration without taking the time to establish the relationship.
Delivery
Much has been written about collaborative governance and the ways to achieve more effective outcomes from these partnerships. The design conditions discussed above are sometimes referred to as antecedent conditions for collaborative governance. This section discusses the elements of interest during the actual delivery of the project.
Collaborative governance is defined, at least in part, by principles of consensus-oriented decision making (Ansell & Gash, 2008). This is an important consideration, as consensus is not necessarily a required component of an academic collaboration. However, the ideal of a collaborative relationship is one which goes beyond just consultation. In normal decision making, the agency is ultimately held responsible for policy outcomes; collaborative governance shifts “ownership” of decision making from the agency to the stakeholders acting collectively. This speaks to the interdependence noted above under design.
Defining the ultimate goal of a collaborative project involves having discussions about the role of decision making. As noted above, a common frustration for academic partners is the feeling that their work ultimately fails to shape the policy decisions made. Ensuring there is a clear understanding of how the research will and will not be used, and agreeing on contingencies for when consensus cannot be reached, are essential discussions during the project design phase.
Two considerations from policy theory are also of interest here: policy learning and policy translation. Both capture a similar idea: that the instrumental, rational assumption of how knowledge travels in policy making is flawed. As Cairney (2020) states, we need to “reject the temptation to describe policy learning simplistically, with reference to a process that we might associate incorrectly with teachers transmitting facts to children. Nor should we assume that adults simply change their beliefs when faced with new evidence” (p. 208). Rather, policy actors are more likely to generate learning through the process of engaging with new ideas, from diverse sources and forms of information. Knowledge is contextual and contingent, and developed in negotiation with what is already ‘known’. The ability to see this negotiation, or translation, as a part of the process, rather than as a loss of fidelity of the research, will be important for academic partners in collaborative projects (Mukhtarov & Daniell, 2016; Stark, 2019).
Some more concrete lessons from the literature include the importance of regular meetings and of establishing short-term goals (small wins) where parties can build trust and demonstrate competency (Ansell & Gash, 2008; Bryson et al., 2015; Weick, 1984). The short-term or intermediate outcomes of these projects may represent tangible outputs in themselves, but they are also essential for building the momentum that can lead to successful collaboration (Ansell & Gash, 2008). Despite this, these smaller-scale outputs are often not well supported by the University sector, and this is a challenge which will need to be addressed explicitly in many collaborative projects. As noted by Sasse and Haddon (2019), “engaging with policy making is typically not a directly funded activity [for academics], which makes it hard for academics to buy themselves out of their other commitments” (p. 16). This is an important consideration when defining the preferred short-term goals. Are there resources available within the Department to provide support with communications? It is also an important goal for the University sector to find ways to incentivise policy impact as an alternative to a more ‘typical’ path to promotion.
Staff turnover in the public sector also impacts effective collaboration. This is perhaps most clearly acknowledged in a report by the Government Office for Science on Engaging with academics. The report states that “a successful collaboration may be forgotten when the key contact moves on, leading to duplication of research effort and a lack of awareness as to what is going on in other teams” (2013, p. 25). To address this, it notes that several departments have produced resources such as “databases of stakeholders and academics and some have [a] database of reports they have produced” (p. 25). However, as noted above regarding policy learning and translation, knowledge is often generated over time and through personal relationships and experience. Having the information available is not, on its own, enough. Knowledge management is only one step, albeit an important one.
Review
There is little research on the actual practices of academic-public sector collaboration and knowledge translation activities (Oliver, 2022). To date, the focus has generally been on the outcomes of the collaborative project itself (the program), rather than on the process of collaborating. This is despite the fact that ‘process’ success is an important element in frameworks of policy success (Bovens et al., 2002; McConnell, 2010).
It is also generally agreed that evaluative assessments should explore the process for individual participants, member organisations, the collaboration as a whole and the community (Bryson et al., 2015, p. 649). In practice, however, such assessment is usually undertaken only as a small part of a broader evaluation and is often not published.
The resistance to undertaking this style of evaluation is highlighted in a report by Bray, Gray and ’t Hart (2019), which informed the Independent Review of the APS. While that document is focused on the evaluation of programs, its tools will also be useful for interrogating the collaborative process. Bray, Gray and ’t Hart offer a suite of complementary options that could be explored by the APS and are worthy of consideration here as well:
- The building up of critical-incident and near-miss reporting systems, particularly within delivery and regulatory agencies. These can be modelled on good practices currently extant within, among others, the process industries, the aviation sector and hospitals.
- Adopting ‘whole system in the room’ debriefs. These are carefully prepared and facilitated Chatham House rule exercises where critical cases (near misses, explicit failures, ongoing or ad hoc instances of high performance) are reconstructed and reflected upon, drawing on the perspectives of designers, (co-)producers, deliverers, and targets/recipients of policies and programs. The focus lies on what may be learned from the experience, by whom and how this learning can be actioned.
- ‘Learning from our stakeholders’ exercises. These can take the shape of focus group or fishbowl sessions. In these sessions, clients, stakeholders (including otherwise ‘soft voices’ in the sector), and independent experts of policies and programs are explicitly encouraged to articulate their experience of tensions, disappointments and frustrations with an agency, as well as any highly positive, constructive and impressive performances by the agency. Agency representatives observe but do not speak, let alone defend, during these sessions. The feedback obtained from these sessions can be compiled, analysed and used to craft unit, program and agency level ‘Learning from our Stakeholders’ reports. These reports can be used to feed into strategy, innovation, design and improvement processes within the agency.
- Expanding and better using existing routines of recognising professional achievement. Awards events and competitions could be organised in a tiered, multilevel fashion (from branch to agency to APS systematic level). They could be designed and leveraged not just to put a positive spotlight on certain high-performing and dedicated individuals and teams, but to generate a series of standardised case narratives describing the nature, operative mechanisms and boundary conditions of successful performance. Such leveraging can take several forms, such as:
  - staging annual agency-level Learning Festivals open to all staff, where success cases are presented, subjected to ‘critical friend’ scrutiny and form the basis of ‘lesson-drawing’ workshops;
  - presenting award-winning cases from across the APS at dedicated APSC-run Learning Conferences or IPAA/ANZSOG annual conferences open to the public.
- Widely disseminating conference proceedings featuring both individual case histories as well as comparative, thematic and lesson-drawing reflections by commissioned observers, across and beyond the APS.
- Training evaluators across the APS in the methodology and tools of positive policy evaluation, as well as encouraging their use by other review and accountability bodies such as the Auditor-General, the Ombudsman, and (through inculcation in the Ministerial and Parliamentary Services division) the Senate and House. These tools include: Appreciative Inquiry, the Success Case method, the Most Significant Change method, tracking and analysing instances of Positive Deviance, and Developmental Evaluation strategies. (pp. 24-25)
Undertaking this reflective work, and ideally making some of the lessons learned publicly available, will assist in the practice of stewardship. Again turning to Bray, Gray and ’t Hart (2019), it is clear that
it needs to become both normal and safe within the APS to forensically – methodically, dispassionately – take stock and ‘look back’ at how and how well policies, programs and projects are performing; to actively seek out voices from clients, stakeholders and critics; to ask hard questions about what is valuable and what is not; and to re-examine beliefs and assumptions on which policy decisions were made and programs were designed in light of the subsequent experiences after they were put into practice. (p. 26)
For this to be most effective, the APS should also undertake a detailed evaluation of how it approaches the process of collaboration itself, and what lessons it can learn to improve moving forward.