Publications

2012
Hoffman, S.J. & Røttingen, J.-A., 2012. Assessing Implementation Mechanisms for an International Agreement on Research and Development for Health Products. Bulletin of the World Health Organization, 90(11), pp. 854-63. PDF

The Member States of the World Health Organization (WHO) are currently debating the substance and form of an international agreement to improve the financing and coordination of research and development (R&D) for health products that meet the needs of developing countries. In addition to considering the content of any possible legal or political agreement, Member States may find it helpful to reflect on the full range of implementation mechanisms available to bring any agreement into effect. These include mechanisms for states to make commitments, administer activities, manage financial contributions, make subsequent decisions, monitor each other's performance and promote compliance. States can make binding or non-binding commitments through conventions, contracts, declarations or institutional reforms. States can administer activities to implement their agreements through international organizations, sub-agencies, joint ventures or self-organizing processes. Finances can be managed through specialized multilateral funds, financial institutions, membership organizations or coordinated self-management. Decisions can be made through unanimity, consensus, equal voting, modified voting or delegation. Oversight can be provided by peer review, expert review, self-reports or civil society. Together, states should select their preferred options across categories of implementation mechanisms, each of which has advantages and disadvantages. The challenge lies in choosing the most effective combinations of mechanisms for supporting an international agreement (or set of agreements) that achieves collective aspirations in a way and at a cost that are both sustainable and acceptable to those involved. In making these decisions, WHO's Member States can benefit from years of experience with these different mechanisms in health and its related sectors.

Hoffman, S.J. & Røttingen, J.-A., 2012. Be Sparing with International Laws. Nature, 483(7389), p. 275. PDF
Hoffman, S.J. & Sossin, L., 2012. Empirically Evaluating the Impact of Adjudicative Tribunals in the Health Sector: Context, Challenges and Opportunities. Health Economics, Policy & Law, 7(2), pp. 147-74. PDF

Adjudicative tribunals are an integral part of health system governance, yet their real-world impact remains largely unknown. Most assessments focus on internal accountability and use anecdotal methodologies; few studies, if any, empirically evaluate their external impact and use these data to test effectiveness, track performance, inform service improvements and ultimately strengthen health systems. Given that such assessments would yield important benefits and have been conducted successfully in similar settings (e.g. specialist courts), their absence is likely attributable to complexity in the health system, methodological difficulties and the legal environment within which tribunals operate. We suggest practical steps for potential evaluators to conduct empirical impact evaluations along with an evaluation matrix template featuring possible target outcomes and corresponding surrogate endpoints, performance indicators and empirical methodologies. Several system-level strategies for supporting such assessments have also been suggested for academics, health system institutions, health planners and research funders. Action is necessary to ensure that policymakers do not continue operating without evidence but can rather pursue data-driven strategies that are more likely to achieve their health system goals in a cost-effective way.

Hoffman, S.J. & Frenk, J., 2012. Producing and Translating Health System Evidence for Improved Global Health. Journal of Interprofessional Care, 26(1), pp. 4-5. PDF
Hoffman, S.J. & Rizvi, Z., 2012. WHO's Undermining Tobacco Control. The Lancet, 380(9843), pp. 727-8. PDF
2011
Lavis, J.N. & Hoffman, S.J., 2011. Dialogue Summary: Addressing Health and Emerging Global Issues in Canada, Hamilton, Ontario, Canada: McMaster Health Forum. PDF
Hoffman, S.J. & Lavis, J.N., 2011. Issue Brief: Addressing Health and Emerging Global Issues in Canada, Hamilton, Ontario, Canada: McMaster Health Forum. PDF
Hoffman, S.J. ed., 2011. Student Voices 2: Assessing Proposals for Global Health Governance Reform, Hamilton, Ontario, Canada: McMaster Health Forum. PDF
Hoffman, S.J. ed., 2011. Student Voices 3: Advocating for Global Health through Evidence, Insight and Action, Hamilton, Ontario, Canada: McMaster Health Forum. PDF
Hoffman, S.J., et al., 2011. Assessing Healthcare Providers' Knowledge and Practices Relating to Insecticide-Treated Nets and the Prevention of Malaria in Ghana, Laos, Senegal and Tanzania. Malaria Journal, 10(363), pp. 1-12. PDF

Background: Research evidence is not always being disseminated to healthcare providers who need it to inform their clinical practice. This can result in the provision of ineffective services and an inefficient use of resources, the implications of which might be felt particularly acutely in low- and middle-income countries. Malaria prevention is a particularly compelling domain to study evidence/practice gaps given the proven efficacy, cost-effectiveness and disappointing utilization of insecticide-treated nets (ITNs).

Methods: This study compares what is known about ITNs to the related knowledge and practices of healthcare providers in four low- and middle-income countries. A new questionnaire was developed, pilot tested, translated and administered to 497 healthcare providers in Ghana (140), Laos (136), Senegal (100) and Tanzania (121). Ten questions tested participants' knowledge and clinical practice related to malaria prevention. Additional questions addressed their individual characteristics, working context and research-related activities. Ordinal logistic regressions with knowledge and practices as the dependent variables were conducted in addition to descriptive statistics.

Results: The survey achieved a 75% response rate (372/497) across Ghana (107/140), Laos (136/136), Senegal (51/100) and Tanzania (78/121). Few participating healthcare providers correctly answered all five knowledge questions about ITNs (13%) or self-reported performing all five clinical practices according to established evidence (2%). Statistically significant factors associated with higher knowledge within each country included: 1) training in acquiring systematic reviews through the Cochrane Library (OR 2.48, 95% CI 1.30-4.73); and 2) ability to read and write English well or very well (OR 1.69, 95% CI 1.05-2.70). Statistically significant factors associated with better clinical practices within each country included: 1) reading scientific journals from their own country (OR 1.67, 95% CI 1.10-2.54); 2) working with researchers to improve their clinical practice or quality of working life (OR 1.44, 95% CI 1.04-1.98); 3) training on malaria prevention since their last degree (OR 1.68, 95% CI 1.17-2.39); and 4) easy access to the internet (OR 1.52, 95% CI 1.08-2.14).

Conclusions: Improving healthcare providers' knowledge and practices is an untapped opportunity for expanding ITN utilization and preventing malaria. This study points to several strategies that may help bridge the gap between what is known from research evidence and the knowledge and practices of healthcare providers. Training on acquiring systematic reviews and facilitating internet access may be particularly helpful.

Hoffman, S.J., 2011. Ending Medical Complicity in State-Sponsored Torture. The Lancet, 378(9802), pp. 1535-7. PDF
Hoffman, S.J. & Røttingen, J.-A., 2011. A Framework Convention on Obesity Control? The Lancet, 378(9809), p. 2068. PDF
Hoffman, S.J., Pogge, T. & Hollis, A., 2011. New Drug Development. The Lancet, 377(9769), pp. 901-2. PDF
Hoffman, S.J. & Pogge, T., 2011. Revitalizing Pharmaceutical Innovation for Global Health. Health Affairs, 30(2), p. 367. PDF
2010
Gilbert, J.H.V., et al., 2010. Framework for Action on Interprofessional Education and Collaborative Practice. Geneva: World Health Organization. PDF

At a time when the world is facing a shortage of health workers, policymakers are looking for innovative strategies that can help them develop policy and programmes to bolster the global health workforce. The Framework for Action on Interprofessional Education and Collaborative Practice highlights the current status of interprofessional collaboration around the world, identifies the mechanisms that shape successful collaborative teamwork and outlines a series of action items that policymakers can apply within their local health system. The goal of the Framework is to provide strategies and ideas that will help health policymakers implement the elements of interprofessional education and collaborative practice that will be most beneficial in their own jurisdiction.

Guindon, E.G., et al., 2010. Bridging the Gaps Among Research, Policy and Practice in Ten Low- and Middle-Income Countries: Development and Testing of a Questionnaire for Health-Care Providers. Health Research Policy and Systems, 8(3), pp. 1-9. PDF

Background: The reliability and validity of instruments used to survey health-care providers' views about and experiences with research evidence have seldom been examined.

Methods: Country teams from ten low- and middle-income countries (China, Ghana, India, Iran, Kazakhstan, Laos, Mexico, Pakistan, Senegal and Tanzania) participated in the development, translation, pilot-testing and administration of a questionnaire designed to measure health-care providers' views and activities related to improving their clinical practice and their awareness of, access to and use of research evidence, as well as changes in their clinical practice that they attribute to particular sources of research evidence that they have used. We use internal consistency as a measure of the questionnaire's reliability and, whenever possible, we use exploratory factor analyses to assess the degree to which questions that pertain to a single domain actually address common themes. We assess the questionnaire's face validity and content validity and, to a lesser extent, we also explore its criterion validity.

Results: The questionnaire has high internal consistency, with Cronbach's alphas between 0.7 and 0.9 for 16 of 20 domains and sub-domains (identified by factor analyses). Cronbach's alphas are greater than 0.9 for two domains, suggesting some item redundancy. Pre- and post-field work assessments indicate the questionnaire has good face validity and content validity. Our limited assessment of criterion validity shows weak but statistically significant associations between the general influence of research evidence among providers and more specific measures of providers' change in approach to preventing or treating a clinical condition.

Conclusion: Our analysis points to a number of strengths of the questionnaire - high internal consistency (reliability) and good face and content validity - but also to areas where it can be shortened without losing important conceptual domains.

Cameron, D., et al., 2010. Bridging the Gaps Among Research, Policy and Practice in Ten Low- and Middle-Income Countries: Development and Testing of a Questionnaire for Researchers. Health Research Policy and Systems, 8(4), pp. 1-8. PDF

Background: A questionnaire could assist researchers, policymakers, and healthcare providers to describe and monitor changes in efforts to bridge the gaps among research, policy and practice. No questionnaire focused on researchers' engagement in bridging activities related to high-priority topics (or the potential correlates of their engagement) has been developed and tested in a range of low- and middle-income countries (LMICs).

Methods: Country teams from ten LMICs (China, Ghana, India, Iran, Kazakhstan, Laos, Mexico, Pakistan, Senegal, and Tanzania) participated in the development and testing of a questionnaire. To assess reliability we calculated the internal consistency of items within each of the ten conceptual domains related to bridging activities (specifically Cronbach's alpha). To assess face and content validity we convened several teleconferences and a workshop. To assess construct validity we calculated the correlation between scales and counts (i.e., criterion measures) for the three countries that employed both and we calculated the correlation between different but theoretically related (i.e., convergent) measures for all countries.

Results: Internal consistency (Cronbach's alpha) for sets of related items was very high, ranging from 0.89 (0.86-0.91) to 0.96 (0.95-0.97), suggesting some item redundancy. Both face and content validity were determined to be high. Assessments of construct validity using criterion-related measures showed statistically significant associations for related measures (with gammas ranging from 0.36 to 0.73). Assessments using convergent measures also showed significant associations (with gammas ranging from 0.30 to 0.50).

Conclusions: While no direct comparison can be made to a comparable questionnaire, our findings do suggest a number of strengths of the questionnaire but also the need to reduce item redundancy and to test its capacity to monitor changes over time.

Lavis, J.N., et al., 2010. Bridging the Gaps Between Research, Policy and Practice in Low- and Middle-Income Countries: A Survey of Researchers. Canadian Medical Association Journal, 182(9), pp. E350-E361. PDF

Background: Many international statements have urged researchers, policy-makers and health care providers to collaborate in efforts to bridge the gaps between research, policy and practice in low- and middle-income countries. We surveyed researchers in 10 countries about their involvement in such efforts.

Methods: We surveyed 308 researchers who conducted research on one of four clinical areas relevant to the Millennium Development Goals (prevention of malaria, care of women seeking contraception, care of children with diarrhea and care of patients with tuberculosis) in each of 10 low- and middle-income countries (China, Ghana, India, Iran, Kazakhstan, Laos, Mexico, Pakistan, Senegal and Tanzania). We focused on their engagement in three promising bridging activities and examined system-level, organizational and individual correlates of these activities.

Results: Less than half of the researchers surveyed reported that they engaged in one or more of the three promising bridging activities: 27% provided systematic reviews of the research literature to their target audiences, 40% provided access to a searchable database of research products on their topic, and 43% established or maintained long-term partnerships related to their topic with representatives of the target audience. Three factors emerged as statistically significant predictors of respondents’ engagement in these activities: the existence of structures and processes to link researchers and their target audiences predicted both the provision of access to a database (odds ratio [OR] 2.62, 95% CI 1.30–5.27) and the establishment or maintenance of partnerships (OR 2.65, 95% CI 1.25–5.64); stability in their contacts predicted the provision of systematic reviews (OR 2.88, 95% CI 1.35–6.13); and having managers and public (government) policy-makers among their target audiences predicted the provision of both systematic reviews (OR 4.57, 95% CI 1.78–11.72) and access to a database (OR 2.55, 95% CI 1.20–5.43).

Interpretation: Our findings suggest potential areas for improvement in light of the bridging strategies targeted at health care providers that have been found to be effective in some contexts and the factors that appear to increase the prospects for using research in policy-making.

Sossin, L. & Hoffman, S.J., 2010. Evaluating the Impact of Remedial Authority: Adjudicative Tribunals in the Health Sector. In K. Roach & R. J. Sharpe, eds. Taking Remedies Seriously. Montreal: Canadian Institute for Administration of Justice, pp. 521-548. Publisher's Version

Evaluating the success of adjudicative tribunals is an important but elusive undertaking. Adjudicative tribunals are created by governments and given statutory authority by legislatures for a host of reasons. These reasons may and often do include legal aspects, policy aspects and partisan aspects. While such tribunals are increasingly being asked by governments to be accountable, too often this devolves into publishing statistics on their caseload, dispositions, budgets and staffing. We are interested in a different and more basic question - are these tribunals successful? How do we know, for example, whether the remedies ordered by a tribunal actually do advance the purposes for which it was created? Can the success of an adjudicative tribunal be subject to meaningful empirical validation? While issues of evaluation and accountability cut across national and jurisdictional boundaries, the authors argue that this type of question can only be addressed empirically, by actually looking to the practice of a particular board or boards, in the context of a particular statute or statutes, and in particular jurisdictions at particular times. Such accounts can and should form the basis for comparative study. Only through comparative study can the value and limitations of particular methodologies become apparent. This study takes as its case study the role of adjudicative tribunals in the health system. The authors draw primarily from Canadian tribunal experience, though examples from other jurisdictions are used to demonstrate the potential of empirical evaluation. The authors discuss the relative dearth of empirical study in administrative law and argue that it ought to be the focus of the discussion on accountability in administrative justice.

PDF
Hoffman, S.J. ed., 2010. Student Voices: Advocating for Global Health through Evidence, Insight and Action, Hamilton, Ontario, Canada: McMaster Health Forum. PDF
