Generative AI in Public Policy Research

Artificial Intelligence for Scientific Research

The integration of generative AI systems into research routines expands analytical capacity for reading, synthesis, and prototyping, but its legitimacy depends on a foundation of clear ethical principles. Among the most recognized frameworks are the UNESCO Recommendation on the Ethics of Artificial Intelligence, which emphasizes human rights, transparency, fairness, and impact assessment, and the OECD Principles on Responsible AI, which reinforce democratic values and robust governance throughout the AI lifecycle (UNESCO, 2021; OECD, 2019). For researchers, these frameworks imply: proportional and explainable use of models; thorough documentation of data, prompts, and versions; mechanisms for contestability; and attention to sustainability and non-discrimination.

In the editorial domain, there is growing convergence around guidelines that prohibit attributing authorship to AI tools and require transparency about their use. COPE maintains that chatbots do not meet authorship criteria and that their use must be declared; the ICMJE (Jan. 2024) has incorporated guidance on how to acknowledge AI assistance by authors, reviewers, and editors; editorial policies from the Nature group and Elsevier require explicit disclosure, human accountability, and careful review of potential biases and factual errors (COPE, 2023; ICMJE, 2024; NATURE, 2024; ELSEVIER, 2024).

In practical terms, researchers should: (i) disclose the tasks performed with AI (e.g., initial drafting, style checking, generation of auxiliary code); (ii) verify factual accuracy and citations against primary sources; and (iii) assume full and final responsibility for the integrity of the manuscript’s content.
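
To make point (i) concrete, a disclosure paragraph can be assembled programmatically from the list of AI-assisted tasks. This is an illustrative sketch only: the wording and field names are assumptions in the spirit of COPE/ICMJE guidance, not any journal's required template.

```python
# Illustrative sketch: building an AI-use disclosure statement from a task
# list. Wording and parameters are assumptions, not a journal's template.

def build_ai_disclosure(tool: str, version: str, tasks: list[str]) -> str:
    """Return a short disclosure paragraph listing the tasks performed with AI."""
    task_list = "; ".join(tasks)
    return (
        f"During the preparation of this work, the authors used {tool} "
        f"({version}) for the following tasks: {task_list}. "
        "The authors reviewed and edited all output and take full "
        "responsibility for the content of the publication."
    )

statement = build_ai_disclosure(
    "ChatGPT", "GPT-4o",
    ["initial drafting", "style checking", "generation of auxiliary code"],
)
print(statement)
```

Keeping the task list as structured data, rather than free text, also makes it easy to reuse the same record in the methods section and in submission forms.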

The most widely discussed ethical risks include data privacy and confidentiality (especially when files are uploaded for analysis), hallucinations and fabricated references, model-propagated biases, insufficient traceability of sources, and the impact on reproducibility. Recent editorial policies remind authors that AI outputs may appear authoritative yet contain inaccuracies; it is the author’s duty to verify and properly cite the literature—never the AI itself—as a source (ICMJE, 2024; Nature, 2025). Recommended mitigation measures include: using only anonymized or authorized data; opting for enterprise environments with data safeguards when necessary; requiring search and citation logs; recording versions and prompts; and submitting generated excerpts to human peer review and similarity-checking tools.
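
The first mitigation measure, uploading only anonymized data, can be partially supported by a pre-upload redaction pass. The sketch below assumes simple regex rules are adequate for the data at hand; a real project should rely on a vetted de-identification pipeline and a documented risk assessment.

```python
# Minimal pre-upload redaction sketch. The patterns (e-mails, CPF-style
# IDs, long numeric identifiers) are illustrative assumptions; real
# de-identification requires a reviewed, domain-specific rule set.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # e-mail addresses
    (re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b"), "[ID]"),  # CPF-style IDs
    (re.compile(r"\b\d{9,}\b"), "[NUMBER]"),                 # long numeric IDs
]

def redact(text: str) -> str:
    """Apply each redaction rule in order and return the sanitized text."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Contact maria.silva@example.org, CPF 123.456.789-09, case 2023001234."
sanitized = redact(sample)
print(sanitized)
```

Rule-based redaction is a floor, not a ceiling: it catches obvious identifiers but cannot guarantee anonymity, which is why the risk assessment remains the researcher's responsibility.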

Among popular tools, ChatGPT stands out for its data-analysis capabilities with file uploads, table and chart generation, and its ability to perform exploratory data analysis within the environment (“Advanced Data Analysis”). In 2025, OpenAI also introduced enhancements to Search within ChatGPT and announced Pulse, an experience that conducts proactive research and delivers updates based on user preferences—useful for preliminary literature screening and hypothesis scouting, which must later be verified in primary sources (OPENAI, 2025). In scientific research, ethical uses include drafting protocols and checklists, outlining analysis plans, generating annotated code, and suggesting argumentative structures, always with human oversight and citations to original sources (never to the model itself).

Perplexity was designed with an emphasis on search with citations, and more recently introduced a Deep Research mode that performs multiple queries, reads hundreds of sources, and produces reports with full references, enabling traceability of the information path—valuable for rapid reviews and exploratory mappings. The platform also offers a Pro/Research mode and a library for organizing findings (PERPLEXITY, 2025). Responsible use entails: (i) inspecting all references provided; (ii) excluding unreliable websites; (iii) grounding conclusions in peer-reviewed articles and official documents; and (iv) documenting all filters and eligibility decisions applied by both the system and the researcher.
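
Step (i), inspecting all references, can begin with an automated first-pass screen. The sketch below checks only that each entry carries a syntactically plausible DOI; the record format is an assumption, and a well-formed DOI is not proof the work exists, so every entry must still be resolved and read at the source.

```python
# Hedged sketch: first-pass screening of AI-suggested references by DOI
# syntax. Passing this check does NOT verify that the work exists; each
# reference must still be resolved and read in the primary source.
import re

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")  # Crossref-style DOI syntax

def screen_references(refs: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split references into plausible-DOI entries and entries needing manual review."""
    plausible, needs_review = [], []
    for ref in refs:
        doi = ref.get("doi", "")
        (plausible if DOI_PATTERN.match(doi) else needs_review).append(ref)
    return plausible, needs_review

refs = [
    {"title": "Some policy paper", "doi": "10.1000/xyz123"},
    {"title": "Possibly fabricated", "doi": "not-a-doi"},
]
ok, review = screen_references(refs)
```

Entries that fail the screen are exactly the ones most likely to be hallucinated references, so routing them to manual review concentrates verification effort where it matters.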

Gemini integrates natively into the Google ecosystem, featuring multimodal chat, Deep Research, and incorporation into Workspace (Gmail, Docs, Sheets, Drive), including enterprise functionalities with expanded rollout in 2025. For research, this means workflow integration across documents and spreadsheets, PDF summarization, guided notes (NotebookLM/Audio Overviews), and context-aware collaboration—useful for managing literature reviews and experimental data (GOOGLE, 2024–2025). From an ethical standpoint, it is essential to configure institutional data policies (e.g., disabling training outside contractual scopes) and maintain reference repositories with stable document versions.

Grok (xAI) distinguishes itself through Live Search, which draws on real-time data from the X platform (formerly Twitter) and the broader web. In scientific contexts, it may be useful for trend monitoring, identifying preprints mentioned on social media, and tracking emerging public policy debates, provided that credibility criteria and triangulation with official or peer-reviewed sources are applied. Because social networks carry noise and misinformation, researchers should define source-eligibility rules in advance and maintain logs of queries and results (xAI, 2025).
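
Such source-eligibility rules can be made explicit and auditable in code. The sketch below assumes a simple domain allowlist; the domains shown and the log format are assumptions that a research team would define and pre-register for its own study.

```python
# Illustrative source-eligibility filter with an auditable query log.
# The allowlist and log fields are assumptions a team would pre-register.
from datetime import datetime, timezone
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"arxiv.org", "doi.org", "who.int", "oecd.org"}
query_log: list[dict] = []

def eligible(url: str) -> bool:
    """Keep a result only if its host is on the pre-registered allowlist."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    return host in ALLOWED_DOMAINS

def record_query(query: str, results: list[str]) -> list[str]:
    """Filter results and append an auditable entry to the query log."""
    kept = [u for u in results if eligible(u)]
    query_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "returned": len(results),
        "kept": kept,
    })
    return kept

kept = record_query(
    "carbon tax preprints",
    ["https://arxiv.org/abs/2501.00001", "https://randomblog.example/post"],
)
```

Defining the allowlist before searching, rather than after seeing results, keeps the filter from becoming a post hoc justification for convenient sources.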

A prudent protocol for using generative AI in research can be summarized as follows:

  1. Clarity of purpose and alignment with UNESCO/OECD ethical frameworks;
  2. No AI authorship and explicit disclosure of use in accordance with COPE/ICMJE/editorial policies;
  3. Data protection and risk assessment before any upload;
  4. Source traceability (logs, links, snapshots, DOIs) and manual verification;
  5. Reproducibility (saving prompts, versions, seeds, and scripts); and
  6. Critical evaluation of bias and societal impact.
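
Item 5 of the protocol can be sketched as a small logging routine that persists prompts, model versions, seeds, and a hash of the accompanying script. The field names and file layout are assumptions for illustration; the point is that every AI-assisted step leaves an auditable record.

```python
# Minimal reproducibility log (item 5 of the protocol): one JSON Lines
# record per AI-assisted run. Field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_run(path: Path, model: str, prompt: str, seed: int, script: str) -> dict:
    """Append one reproducibility record to a JSON Lines file and return it."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "seed": seed,
        "script_sha256": hashlib.sha256(script.encode()).hexdigest(),
    }
    with path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

rec = log_run(
    Path("ai_runs.jsonl"),
    model="gpt-4o-2024-08-06",
    prompt="Summarize dataset X",
    seed=42,
    script="print('eda')",
)
```

Append-only JSON Lines keeps the log trivially diffable and easy to archive alongside the manuscript's supplementary materials.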

Tools such as ChatGPT, Perplexity, Gemini, and Grok can accelerate parts of the research cycle, but they must serve scientific integrity—never replace it.

References

COPE – Committee on Publication Ethics. Authorship and AI tools. Available at: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools. Accessed: 30 Sept. 2025.
ELSEVIER. The use of generative AI and AI-assisted technologies in writing for Elsevier. Available at: https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier. Accessed: 30 Sept. 2025.
GOOGLE. The next chapter of our Gemini era. Available at: https://blog.google/technology/ai/google-gemini-update-sundar-pichai-2024/. Accessed: 30 Sept. 2025.
GOOGLE. Gemini AI features now included in Google Workspace subscriptions. Available at: https://support.google.com/a/answer/15756885?hl=en. Accessed: 30 Sept. 2025.
ICMJE – International Committee of Medical Journal Editors. Updated ICMJE Recommendations (January 2024). Available at: https://www.icmje.org/news-and-editorials/updated_recommendations_jan2024.html. Accessed: 30 Sept. 2025.
NATURE PORTFOLIO. Artificial Intelligence (AI) – Editorial policies. Available at: https://www.nature.com/nature-portfolio/editorial-policies/ai. Accessed: 30 Sept. 2025.
OECD. Recommendation of the Council on Artificial Intelligence (AI Principles). Available at: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449. Accessed: 30 Sept. 2025.
OPENAI. Data analysis with ChatGPT. Available at: https://help.openai.com/en/articles/8437071-data-analysis-with-chatgpt. Accessed: 30 Sept. 2025.
OPENAI. ChatGPT — Release Notes. Available at: https://help.openai.com/en/articles/6825453-chatgpt-release-notes. Accessed: 30 Sept. 2025.
OPENAI. Introducing ChatGPT Pulse. Available at: https://openai.com/index/introducing-chatgpt-pulse/. Accessed: 30 Sept. 2025.
PERPLEXITY AI. Introducing Perplexity Deep Research. Available at: https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research. Accessed: 30 Sept. 2025.
PERPLEXITY AI. Getting started. Available at: https://www.perplexity.ai/hub/getting-started. Accessed: 30 Sept. 2025.
UNESCO. Recommendation on the Ethics of Artificial Intelligence. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000386510. Accessed: 30 Sept. 2025.
xAI. Grok. Available at: https://x.ai/grok. Accessed: 30 Sept. 2025.
xAI. Live Search – API Guide. Available at: https://docs.x.ai/docs/guides/live-search. Accessed: 30 Sept. 2025.
