
Bridging the Gaps: How Language Models Can Connect Ethics, Science, and Policy

What this post explores:

  • The disconnect between philosophy, empirical research, and policy in practical ethics
  • Initial experiments using language models to bridge these gaps
  • Case study: Using Claude to extract empirical claims and policy implications from my own co-authored paper
  • Limitations of this approach and questions for further exploration


At its best, practical ethics addresses normative questions with philosophical rigor while remaining grounded in empirical evidence and offering meaningful input for policy. However, the field frequently struggles to demonstrate clear connections between normative analysis, empirical research, and policy.

The Structural Disconnect

The challenge is partly due to the incentives of academic publishing:

  • Applied ethics journals typically prioritize philosophical analysis, creating disincentives to rigorously articulate the link between a normative argument and the specific, testable empirical questions that could validate or challenge it
  • Detailed empirical roadmaps that articulate how new information would affect the ethical debate, or policy frameworks that specify what studies could be conducted and which experts should collaborate, would make papers unwieldy for their primary readership
  • Ethics journal readers typically care more about normative reasoning than the precise details of which studies or policy pathways would connect to these arguments

This disciplinary siloing prevents prospective ethical insights from meaningfully informing empirical research priorities or shaping policy decisions.

The result is that practical ethics papers typically conclude with vague calls for “future research” that gesture loosely toward empirical gaps. Policy implications—if mentioned at all—lack the detailed implementation frameworks needed to resonate with decision-makers. (For some quick, indicative evidence, take a look at this AI-generated review summarizing 43 papers from the Journal of Practical Ethics between 2013 and 2020 – most follow the trend just described.)

Bridging the Gap

This represents a missed opportunity. Imagine practical ethics papers with:

  • Supplementary sections that systematically outline the empirical studies needed to address key normative questions in the paper—specifying testable hypotheses, methodologies, and relevant disciplines that can move the debate forward
  • Policy-oriented appendices that explicitly and rigorously connect a paper’s ethical arguments with existing policy frameworks in specific jurisdictions

These additions could help practical ethics move beyond occasional inter-disciplinary influence to become an effective bridge between theory and application, ultimately creating clearer pathways for collaboration across disciplinary boundaries.

Language Models as Interdisciplinary Bridges

This is where language models offer intriguing possibilities. These systems excel at recognizing patterns across knowledge domains and reframing information to fit specific formats. They could function as tools that connect different academic fields by creating standardized formats for translating between ethical arguments, research questions, and policy ideas.

The idea is that carefully designed prompts, encoding the expertise of scholars who excel at bridging philosophical, empirical, and policy work, could help language models establish better standards for connecting theory to practice. Instead of vague calls for “future empirical work,” these tools could systematically extract empirical assumptions and policy implications from ethics papers and generate concrete research questions and policy frameworks.

Example: From Normative Claims to Empirical Roadmaps

Let me demonstrate this approach with a practical example, focusing specifically on empirical bridging. I used Anthropic’s Claude Projects—essentially Claude with specific instructions—to analyze this open-access paper I co-authored: “Which AI doctor would you like to see? Emulating healthcare provider–patient communication models with GPT-4: proof-of-concept and ethical exploration”.

The method involves instructing Claude to analyze ethics papers through a consistent template that identifies empirical claims, distinguishes between load-bearing and secondary empirical claims and assumptions within the paper, and articulates specific empirical research hypotheses that could support or weaken the ethical argument.
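For concreteness, here is a minimal sketch of what such a setup might look like using the Anthropic Python SDK. The system prompt below is a compressed paraphrase of the template just described (the full prompt is linked above; this is not its exact text), and the model name and file handling are illustrative assumptions rather than my actual configuration.

```python
# A minimal sketch, assuming the Anthropic Python SDK is installed
# (pip install anthropic) and ANTHROPIC_API_KEY is set in the environment.
import anthropic

# Paraphrase of the analysis template described above, not the exact prompt.
ANALYSIS_TEMPLATE = """You are analyzing an applied ethics paper. Produce a structured report:
1. Core argument structure: main ethical conclusion, key steps, scope.
2. Empirical claims, split into load-bearing and secondary claims; for each
   load-bearing claim, state what follows for the argument if it is false.
3. Current evidence assessment for each load-bearing claim.
4. Similar questions in other debates.
5. Priority research questions: testable hypotheses, what a decisive finding
   would show, and the key disciplines involved.
6. Interim analysis: what can be provisionally accepted while awaiting evidence.
"""

def analyze_paper(paper_text: str) -> str:
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: any recent Claude model
        max_tokens=4096,
        system=ANALYSIS_TEMPLATE,
        messages=[{"role": "user", "content": paper_text}],
    )
    return message.content[0].text

if __name__ == "__main__":
    # Hypothetical plain-text export of the paper under analysis.
    with open("which_ai_doctor.txt") as f:
        print(analyze_paper(f.read()))
```

A Claude Project bundles the same ingredients (custom instructions plus an uploaded document) behind the claude.ai interface, so no code is strictly required to replicate the workflow.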

The paper in this example argues that language models could empower patients to choose their preferred medical communication style, potentially enhancing autonomy and improving healthcare outcomes. When Claude analyzed it using this prompt, it generated the structured report below (some of it quite brutal in its assessment of the paper’s empirical claims).


Empirical Claims Analysis: Which AI doctor would you like to see?

Core Argument Structure

The paper explores using Large Language Models (LLMs) to provide patients with choice in healthcare communication styles. The main ethical conclusion suggests that LLMs could empower patients by allowing them to select their preferred communication approach when discussing medical cases.

Key steps in the argument:

  • LLMs show promise in enhancing healthcare provider-patient communication
  • There’s a longstanding debate about optimal communication styles between healthcare providers and patients
    • Emanuel and Emanuel’s four models (paternalistic, informative, interpretive, and deliberative) offer distinct approaches to medical communication
    • No single communication style suits all patients and situations
  • LLMs can be programmed to emulate these different communication styles
    • This could allow patients to select their preferred communication style, enhancing autonomy
  • The approach carries potential benefits and risks requiring further investigation

The scope focuses on LLMs in healthcare communication, particularly in enhancing patient autonomy and shared decision-making through personalized communication styles.

Empirical Claims Identification and Prioritization
Load-bearing empirical claims
  • LLMs can effectively emulate different healthcare provider-patient communication styles
    • This claim underpins the entire proposal – if LLMs cannot convincingly adopt these distinct communication approaches, the application is not viable
    • The authors provide a proof-of-concept with GPT-4, but acknowledge it lacks validation with actual patients
    • If false, the core premise of using LLMs to offer communication style choice collapses
  • Patients have varying preferences for communication styles in healthcare interactions
    • This claim justifies the need for a solution allowing patient choice in communication style
    • The authors cite research indicating patients differ in desired relationships with healthcare providers
    • If false (i.e., if a single communication style were optimal for all patients), the proposed application would be unnecessary
  • Patients would benefit from having the ability to choose their preferred communication style
    • This provides the ethical justification for the proposed LLM application
    • The authors argue that patient autonomy requires choice in communication style
    • If false (i.e., if patient choice of communication style does not enhance outcomes or autonomy), the ethical basis for the proposal would be undermined
  • LLMs can provide more consistent and adaptable communication options than the current “lottery” system
    • This claim suggests the proposed LLM application would improve upon status quo, where patients are assigned providers without consideration of communication style compatibility
    • If false (i.e., if the current human provider system better serves patients despite its inconsistency), the proposed application loses its comparative advantage
Secondary empirical claims
  • LLMs have demonstrated potential to generate more empathetic responses to patient questions compared to physicians in some contexts
  • LLMs’ multilingual capabilities could benefit global health and migrant communities
  • Voice-based LLM communication could enhance accessibility for individuals with low literacy or visual impairments
  • Patients might disclose more information to an AI system than to human providers due to perceived lack of judgment
  • LLMs could serve as training tools for healthcare providers to improve their communication skills
  • LLMs can be customized to create novel communication styles beyond the four established models
Current Evidence Assessment

For Claim 1: LLMs can effectively emulate different communication styles
  • Evidence quality: Weak/Theoretical
  • The authors provide a proof-of-concept using GPT-4, but this lacks validation with actual patients
  • Major uncertainties include whether patients would perceive these emulated styles as intended and effective
  • Methodological challenges include LLM response variability and the complexity of real-world provider-patient interactions
For Claim 2: Patients have varying preferences for communication styles
  • Evidence quality: Moderate
  • The authors cite existing research on variable patient preferences but don’t provide a comprehensive literature review
  • Uncertainties include preference distribution across different populations and factors influencing these preferences
  • Competing interpretations exist regarding how much these preferences impact healthcare outcomes
For Claim 3: Patients would benefit from choosing their preferred communication style
  • Evidence quality: Weak/Theoretical
  • The authors argue this would enhance patient autonomy but provide no empirical evidence linking communication style choice to better outcomes
  • Uncertainties include whether patients know which style would benefit them most and how preferences might change across different medical contexts
For Claim 4: LLMs provide advantages over the current system
  • Evidence quality: Weak/Theoretical
  • The authors provide theoretical arguments about LLM benefits over the current approach without empirical comparison
  • Uncertainties include potential negative consequences of allowing communication style choice, such as reinforcing existing biases
Similar Questions in Other Debates
  1. Personalization in healthcare: Research on personalized medicine and patient-centered care could inform debates about tailoring communication styles
  2. Digital health interventions: Studies on chatbots and health apps could provide insights into technology-mediated healthcare interactions
  3. Shared decision-making: Research on decision aids and approaches could inform debates about balancing provider guidance with patient autonomy
  4. Health literacy and communication: Research on how health literacy affects understanding could inform adaptation of communication styles
  5. Human-computer interaction: Studies on how people interact with AI systems could reveal benefits and limitations of LLMs in healthcare communication
Priority Research Questions
For Claim 1: LLMs can effectively emulate different communication styles
  • Research question: Can patients accurately identify and meaningfully distinguish between communication styles used by an LLM?
  • Hypothesis: Patients will distinguish between the four communication styles when interacting with an LLM and perceive them as distinct approaches to healthcare communication
  • Existing evidence is insufficient without validation from actual patients
  • A decisive finding would show patients consistently identifying different communication styles in LLM interactions
  • Key disciplines: human-computer interaction, healthcare communication, psychology
For Claim 2: Patients have varying preferences for communication styles
  • Research question: How do communication style preferences distribute across demographic groups, medical conditions, and healthcare contexts?
  • Hypothesis: Patient preferences will vary significantly based on demographics, medical conditions, and healthcare contexts
  • Existing evidence lacks comprehensive mapping of preference patterns
  • A decisive finding would provide a detailed distribution of preferences and influencing factors
  • Key disciplines: healthcare communication, psychology, sociology, medical anthropology
For Claim 3: Patients would benefit from choosing their preferred communication style
  • Research question: Does allowing patients to choose their preferred communication style improve healthcare outcomes?
  • Hypothesis: Patients choosing their preferred communication style will show better understanding, higher satisfaction, greater treatment adherence, and improved health outcomes compared to those who cannot choose
  • Existing evidence lacks direct measurement of impact on outcomes
  • A decisive finding would demonstrate causal relationship between communication style choice and improved outcomes
  • Key disciplines: healthcare outcomes research, psychology, medical ethics
For Claim 4: LLMs provide advantages over the current system
  • Research question: How do LLM interactions emulating different communication styles compare to traditional provider interactions?
  • Hypothesis: LLM interactions will result in equivalent or better outcomes and satisfaction compared to traditional provider interactions, with minimal additional risks
  • Existing evidence lacks direct comparison between approaches
  • A decisive finding would show comparative advantages of LLM approach with minimal risks
  • Key disciplines: healthcare outcomes research, medical ethics, AI ethics, human-computer interaction
Interim Analysis

While awaiting better evidence:

  • We can provisionally accept that patients have varying preferences for communication styles, as this has moderate supporting evidence
  • We should be most cautious about claims regarding LLM effectiveness in emulating communication styles and the impact of patient choice on outcomes
  • Decision principles should include:
    • Patient autonomy and informed consent about LLM capabilities and limitations
    • Non-maleficence in avoiding potential harms from misinformation or manipulation
    • Transparency about the experimental nature of these applications
    • Staged implementation with careful monitoring and evaluation
  • Any implementation should be presented as exploratory rather than definitively beneficial
Conclusion

The paper presents an intriguing potential application of LLMs in healthcare communication, but significant empirical research is needed to validate its key claims. Priority should be given to determining whether LLMs can effectively emulate different communication styles in ways patients find meaningful, whether communication style choice improves outcomes, and how LLM-mediated interactions compare to traditional provider interactions. Until such evidence is available, cautious exploration with rigorous evaluation is warranted, guided by principles of patient autonomy, informed consent, and non-maleficence.


Might such supplementary analyses help bridge the ethics-empirical gap? It’s too early to say, but:

The analysis certainly improves on our original paper. While we included scattered calls for further empirical research in the paper, these were vague and lacked any systematic organization.

In contrast, Claude’s analysis:

  • Extracted four specific load-bearing empirical claims in the paper (impressively so, I would say)
  • Methodically outlined precise research questions, hypotheses, and needed types of expertise
  • Created this coherent research agenda in seconds (I also tested more detailed experimental design templates but felt less confident about their reliability)

The promise? Such a low-cost supplementary addition to a medical ethics paper is at least indicative of the potential of language models to quickly transform vague research calls into clear empirical roadmaps—prioritizing testable claims, contextualizing evidence, and suggesting concrete research directions. For empirical researchers, it’s not implausible that this would make discovery and engagement far more likely than following the current scattered breadcrumb trails applied ethicists typically leave behind.

Example: Bridging Ethical Arguments and UK Policy

Another example I’ll highlight is a complementary Claude Project that extracts policy-relevant principles from ethics papers and maps them to the UK policy landscape.

In this project, Claude functions as a systematic translator between academic ethics papers and practical policy applications. The process, which uses this prompt, works like this (a rough code sketch follows the list):

  • First distilling the paper’s core arguments
  • Extracting 3-5 key ethical principles and methodically converting each into concrete policy applications
  • Identifying implementation considerations and key stakeholders for each principle
  • Analyzing how these principles interact with (in this case) existing UK policies, including what needs modification and ideas for new policies
  • Concluding with a strategic engagement plan with specific government contacts and communication approaches
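As before, here is a minimal sketch of how this variant might be wired up, under the same assumptions as the earlier example; the system prompt is again a paraphrase of the linked prompt, not its exact text.

```python
# Policy-bridging variant: same Anthropic SDK setup as the earlier sketch,
# with the analysis template swapped for a (paraphrased) policy template.
import anthropic

POLICY_TEMPLATE = """You are translating an applied ethics paper for UK policy audiences.
Produce a report with these sections:
1. Summary of the paper's core arguments.
2. 3-5 core policy-relevant principles, each with potential policy applications,
   implementation considerations, and key stakeholders.
3. UK policy landscape: relevant existing policies, potential modifications,
   new policy proposals, and cross-cutting considerations.
4. Engagement approach: recommended first points of contact, a communication
   strategy, and policy networks, forums, and events to leverage.
5. Executive summary: key findings, most promising applications,
   top recommendations, and critical stakeholders.
"""

def policy_brief(paper_text: str) -> str:
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption, as before
        max_tokens=4096,
        system=POLICY_TEMPLATE,
        messages=[{"role": "user", "content": paper_text}],
    )
    return message.content[0].text
```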

The resulting analysis below is, as before, at least indicative of how language models appear capable of identifying policy-relevant aspects of ethical arguments and connecting them to the UK institutional landscape. It identified five core principles from the paper and developed potential policy applications, suggesting specific bodies like NHS England, the MHRA, and NICE for implementation (none of which were touched on in the paper).


Policy Implications Analysis: Which AI doctor would you like to see?

Summary

The paper explores how large language models (LLMs) could be deployed to enhance healthcare provider-patient communication by allowing patients to select their preferred communication style. Using GPT-4, the authors demonstrate how LLMs can emulate four different healthcare communication approaches identified by Emanuel and Emanuel: paternalistic, informative, interpretive, and deliberative. Each style represents a different balance of provider authority, information provision, and consideration of patient values.

The authors argue that no single communication style is ideal for all patients and situations, and that LLMs could empower patients by offering them choice over how they receive and engage with medical information. Through proof-of-concept demonstrations, they show how LLMs can be instructed to adopt different communication approaches based on the same medical case. The paper also highlights potential risks, including the possibility that LLMs could reinforce patients’ biases or utilize their persuasive capabilities in ways that lead to manipulation rather than genuine patient empowerment.

Core Policy-Relevant Principles
Patient autonomy and communication preferences

Principle description: Patients have varying preferences for how medical information is communicated to them, and these preferences influence their engagement, understanding, and decision-making. Respecting these preferences is an aspect of respecting patient autonomy.

Potential policy applications and implementation considerations: Policies could require healthcare systems to assess and accommodate patient communication preferences. This could involve developing standardized tools for assessing preferences, providing training for healthcare providers, and integrating AI tools that can adapt to different communication styles. However, implementation would require addressing concerns about resource constraints, potential reinforcement of biases, and ensuring human oversight.

Key stakeholders: NHS England, General Medical Council, healthcare providers, patient advocacy groups, medical education institutions, AI developers specializing in healthcare applications, and ethics committees.

AI as a complement to healthcare provider communication

Principle description: LLMs have potential to enhance rather than replace human healthcare provider communication, particularly in scenarios where providers face time constraints or where patients benefit from repeated explanations. These technologies could serve as tools for both patients and providers.

Potential policy applications and implementation considerations: Policies could define appropriate uses of LLMs in clinical settings, establish requirements for integration with electronic health records, and set standards for human oversight. Implementation challenges include ensuring data privacy, addressing liability issues, and developing mechanisms to monitor the quality and accuracy of AI-generated communications.

Key stakeholders: Department of Health and Social Care, NHS Digital, Information Commissioner’s Office, medical defense organizations, healthcare providers, health technology assessment bodies, and AI ethics experts.

Addressing risks of manipulation and bias reinforcement

Principle description: LLMs possess persuasive capabilities that could lead to unintended manipulation of patients, and allowing patients to choose their preferred communication style may reinforce existing biases rather than challenge them appropriately.

Potential policy applications and implementation considerations: Policies could require transparency in how LLMs are instructed to communicate, mandate disclosures about persuasive capabilities, and establish ethical guidelines for the deployment of LLMs in healthcare communication. Implementation would require establishing monitoring mechanisms and clear reporting structures for concerns.

Key stakeholders: National Institute for Health and Care Excellence (NICE), General Medical Council, patient safety organizations, ethics committees, and digital health regulators.

Equity and accessibility in healthcare communication

Principle description: LLMs could enhance healthcare accessibility through multilingual capabilities and voice-based communication, potentially addressing barriers faced by patients with low literacy levels, visual impairments, or language barriers.

Potential policy applications and implementation considerations: Policies could promote the development and deployment of LLMs that address accessibility needs and health inequalities. Implementation would require assessing the impact on diverse population groups, ensuring technological accessibility, and monitoring for unintended consequences.

Key stakeholders: NHS Health Inequalities Improvement teams, organizations representing marginalized communities, translation services, disability rights organizations, and technology accessibility experts.

Ongoing research and evaluation

Principle description: Empirical investigation is needed to better understand the impact of LLMs on healthcare communication, patient outcomes, and the patient-provider relationship.

Potential policy applications and implementation considerations: Policies could establish frameworks for research, fund pilots, and require ongoing evaluation of LLMs in healthcare settings. Implementation would require developing appropriate metrics, ensuring research ethics, and creating mechanisms for incorporating findings into policy revisions.

Key stakeholders: National Institute for Health Research (NIHR), university research departments, healthcare quality improvement organizations, and patient involvement networks.

UK Policy Landscape

Relevant existing policies in the UK

NHS Long Term Plan and Digital Health Initiatives: The NHS Long Term Plan emphasizes digital transformation, including the use of AI and digital technologies to improve healthcare delivery. Current policies focus primarily on diagnostic and administrative applications rather than communication tools.

The NHS AI Lab: Established to accelerate the safe adoption of AI in health and care, the Lab oversees evaluation and implementation of AI technologies but has not specifically addressed LLMs for patient communication.

NHS Constitution: Enshrines patients’ rights to be involved in decisions about their care and to receive information in a format they can understand, which aligns with the paper’s emphasis on patient communication preferences.

Data Protection and GDPR: Current regulations govern how patient data can be used in AI systems and set requirements for privacy and security that would apply to LLMs used in healthcare.

Medical device regulation: The MHRA has established frameworks for regulating software as a medical device, which may apply to certain applications of LLMs in healthcare, though there remains ambiguity about whether communication tools would fall under this regulatory framework.

GMC guidance on consent and communication: Professional standards emphasize effective communication and shared decision-making but do not specifically address the use of AI tools in this context.

Potential UK policy modifications

Update MHRA guidance on software as a medical device: Clarify whether and how LLMs used for patient communication fall within the regulatory framework, and develop specific requirements for safety, efficacy, and transparency.

Expand NHS Digital Transformation Strategy: Incorporate specific guidance on the use of LLMs for patient communication, including standards for integration with existing systems, data security requirements, and guidance on appropriate use cases.

Modify NICE Health Technology Assessment frameworks: Develop specific methodologies for evaluating the clinical and cost-effectiveness of LLM-based communication tools, considering both immediate outcomes and longer-term impacts on patient engagement and health outcomes.

Enhance GMC guidance on digital communication: Update guidance to include considerations for the use of AI-enabled communication tools, addressing issues of professional responsibility, oversight, and integration with standard clinical practices.

Revise NHS Constitution patient information standards: Explicitly acknowledge the potential role of digital tools, including LLMs, in fulfilling obligations to provide accessible information to patients.

New UK policy proposals

National Standards for Healthcare LLMs: Develop comprehensive standards specifically for LLMs used in healthcare communication, covering accuracy, bias, transparency, data privacy, and evaluation requirements.

NHS LLM Communication Pilot Program: Establish a structured pilot program to evaluate the implementation of LLMs in various healthcare settings, with emphasis on measuring impacts on patient satisfaction, understanding, shared decision-making, and health outcomes.

Health Communication Style Assessment Framework: Create a national framework for assessing patient communication preferences and incorporating these into care pathways, with provisions for how LLMs might support this process.

Digital Health Literacy Initiative: Develop a national strategy to enhance public understanding of AI in healthcare, including capabilities and limitations of LLM-based communication tools.

Healthcare AI Ethics Observatory: Establish an independent body to monitor ethical issues arising from healthcare AI implementation, with specific attention to communication technologies and their impacts on patient autonomy and the provider-patient relationship.

Cross-cutting considerations

Health inequalities impact assessment: Require thorough assessment of how LLM-based communication tools might address or exacerbate existing health inequalities, particularly regarding accessibility for different demographic groups.

Human-AI collaboration frameworks: Develop guidance on appropriate division of roles between human providers and AI systems in patient communication, emphasizing complementary strengths.

Privacy and data protection balancing: Address tensions between data minimization principles and the data needs of effective LLMs, developing frameworks that protect privacy while enabling beneficial applications.

Workforce implications: Consider how the introduction of LLM-based communication tools might affect healthcare workforce roles, training needs, and professional development requirements.

Cost-effectiveness thresholds: Develop specific approaches to evaluating the cost-effectiveness of LLM communication tools, considering both direct costs and potential downstream impacts on healthcare resource utilization.

Engagement Approach
Recommended first UK points of contact
  • NHS Transformation Directorate: As the body responsible for digital transformation in the NHS, they should be engaged early regarding potential implementation pathways.
  • MHRA Digital Health Unit: To address regulatory considerations for LLM-based communication tools.
  • NICE Office for Digital Health: To discuss evaluation frameworks for assessing clinical and cost-effectiveness.
  • NHS AI Lab: To explore how LLM communication tools align with broader AI strategy in healthcare.
  • Healthwatch England: To ensure patient perspectives are central to policy development.
  • Academy of Medical Royal Colleges: To engage clinical leadership across specialties.
  • Patient Information Forum: To leverage expertise in health information standards.
Suggested communication strategy
  • Evidence-based messaging: Center communication on the current evidence regarding LLMs in healthcare, acknowledging both potential benefits and limitations.
  • Patient-centered framing: Position LLM-based communication tools as enhancing, not replacing, human interaction and respecting diverse patient needs.
  • Phased approach: Communicate a staged implementation plan beginning with low-risk use cases and robust evaluation.
  • Stakeholder mapping: Identify supporters and potential critics, tailoring messaging to address specific concerns of different groups.
  • Transparency about risks: Openly acknowledge identified risks and demonstrate how policy proposals address these concerns.
  • Plain language summaries: Develop accessible explanations of complex technical and ethical concepts for diverse audiences.
  • Case studies and scenarios: Use concrete examples to illustrate potential applications and safeguards.
UK Policy networks, forums and events to leverage
  • NHS ConfedExpo: Annual conference bringing together leaders from across the healthcare system.
  • The King’s Fund Digital Health and Care Congress: Forum focusing on digital innovation in healthcare.
  • NHS Clinical Commissioners networks: To engage with commissioning perspectives.
  • Faculty of Clinical Informatics: For expertise on clinical data and technology integration.
  • Health Foundation policy events: To connect with policy experts focused on quality improvement.
  • All-Party Parliamentary Group on Artificial Intelligence: To engage with the legislative perspective.
  • Nuffield Council on Bioethics events: For ethical dimensions of implementation.
  • NHS Providers Digital Boards Programme: To engage trust leadership on governance considerations.
  • Health Innovation Network: To leverage innovation diffusion expertise.
  • Patient participation groups and networks: To ensure diverse patient voices are included.
Executive summary
Key findings from the paper

This paper demonstrates how large language models (LLMs) can be instructed to emulate different healthcare communication styles, potentially allowing patients to engage with medical information in ways that match their preferences. The authors show that LLMs like GPT-4 can effectively adopt paternalistic, informative, interpretive, or deliberative approaches when discussing the same medical case. This capability could address a longstanding challenge in healthcare: the mismatch between how information is communicated to patients and how they would prefer to receive it.

Most promising policy applications

The most promising policy applications involve establishing frameworks for responsible integration of LLMs as complementary tools in healthcare communication. This includes developing standards for accuracy, transparency, and appropriate use; creating evaluation methodologies to assess impact on patient understanding and outcomes; and ensuring equitable access to these technologies across diverse patient populations. Policies should balance innovation with appropriate safeguards against identified risks, particularly regarding potentially manipulative persuasion and reinforcement of biases.

Top 3 recommendations
  1. Establish a national pilot program to evaluate LLM-based communication tools in controlled healthcare settings, with robust assessment of impacts on patient satisfaction, understanding, and outcomes.
  2. Develop comprehensive standards specifically for healthcare communication LLMs, covering accuracy, transparency, bias mitigation, and ethical use.
  3. Create a framework for assessing and documenting patient communication preferences in healthcare records, with guidance on how these preferences can be supported by both human providers and LLM-based tools.
Critical stakeholders

The successful implementation of LLM-based communication tools in healthcare requires engagement with diverse stakeholders. These include NHS leadership responsible for digital transformation; regulatory bodies such as the MHRA and ICO; professional organizations representing healthcare providers; patient advocacy groups, particularly those representing marginalized communities; AI ethics experts; and health technology assessment bodies. Effective engagement should emphasize the complementary role of these technologies in enhancing, not replacing, human healthcare communication, while acknowledging concerns about safety, efficacy, and ethical implementation.


I should add a significant caveat: without the relevant policy expertise – which I lack – it’s difficult to verify the utility of this output. I take this uncertainty as a clear interdisciplinary opportunity—working with policy experts to craft better prompts that generate more relevant outputs for policy-oriented researchers. Just as with the empirical research example, these policy translations could be significantly enhanced through collaboration with specialists who understand what policy researchers and policymakers actually need from normative analyses.

Limitations and Future Directions

While these examples are meant as early explorations, I’d say they hint at the potential for concrete improvements over the current haphazard approaches that leave interdisciplinary linkages implicit and underdeveloped.

Of course, AI-generated analyses come with their own risks: they may encode and entrench existing disciplinary biases, create illusions of comprehensiveness where gaps remain, or present outputs whose accuracy and utility are difficult to verify without domain expertise.

There are doubtless other considerations too—how might these tools affect interdisciplinary collaboration, academic attribution, and power dynamics in knowledge production?

But as a first step, it could be fruitful to engage directly with a range of scientists and policy-oriented researchers to determine whether these outputs could genuinely advance interdisciplinary collaboration, or simply add another layer of abstraction between fields that already struggle to communicate effectively!
