{"id":20601,"date":"2025-07-21T11:56:00","date_gmt":"2025-07-21T15:56:00","guid":{"rendered":"https:\/\/web.uri.edu\/cels\/?p=20601"},"modified":"2025-07-30T12:03:33","modified_gmt":"2025-07-30T16:03:33","slug":"leveraging-technological-equity-ai","status":"publish","type":"post","link":"https:\/\/web.uri.edu\/cels\/news\/leveraging-technological-equity-ai\/","title":{"rendered":"Leveraging Technological Equity: URI&#8217;s Yoshitaka Ota on AI in Global Marine Policy"},"content":{"rendered":"\n<p>Artificial intelligence (AI) is rapidly being incorporated into personal, professional, and academic spaces. It is increasingly being used in the United Nations\u2019 global negotiations for summarizing information, providing policy suggestions, and answering complex questions about global policy. While there are numerous opportunities for its application, it can also have unintended negative consequences due to inherent biases and the unequal distribution of resources among nation-states.&nbsp;<\/p>\n\n\n\n<p>University of Rhode Island\u2019s Professor of Marine Affairs <a href=\"https:\/\/web.uri.edu\/maf\/meet\/yoshitaka-ota\/\">Yoshitaka Ota<\/a> has been working with colleagues to closely examine the pitfalls of AI within marine policy in particular. He and his team had their research <a href=\"https:\/\/www.nature.com\/articles\/s44183-025-00132-7#Abs1\">recently published<\/a> in the international peer-reviewed journal <em>npj Ocean Sustainability. 
<\/em>The article presents a case study of an AI chatbot the team developed \u2013 the Experimental BBNJ Question-Answering Bot \u2013 to explore the opportunities, limitations, and risks that large language models (LLMs) present in the global ocean policy space.<\/p>\n\n\n\n<p><strong>Artificial Intelligence and Ocean Policy<\/strong><\/p>\n\n\n\n<p>While artificial intelligence is increasingly being used in ocean policymaking, disparities exist that put developing nations at a disadvantage. LLMs have the potential to aid researchers and diplomats in UN negotiations, but there is also concern about biases within AI models and the ways they obtain and disseminate information. In the article, the authors cite a growing body of research exposing inherent biases in many AI models, biases that arise from how the models are trained and designed. Such models are prone to reproducing harmful stereotypes, leading to discrimination in job hiring systems, advertisements, and even criminal sentencing (Ziegler et al., 2025).&nbsp;<\/p>\n\n\n\n<p>The authors also point out how misplaced trust in LLMs can have negative impacts and further entrench bias in both the AI systems and the policymakers who become dependent on them. Over-reliance on AI technology, the authors note, may lead to a form of \u201cautomation bias\u201d in which people defer to AI systems over more reputable sources. Confirmation bias is another concern: individuals may accept an AI system\u2019s responses without fact-checking, or favor LLM responses that align with their own beliefs rather than incorporating information and perspectives that differ from their own. This over-reliance and misplaced trust can have disastrous consequences by reproducing unjust, biased, or discriminatory language. 
These concerns drove the team to develop the BBNJ Question-Answering Bot to examine how LLMs like ChatGPT could influence ocean policy negotiations.<\/p>\n\n\n\n<p>The team\u2019s experimental chatbot was named for the recently adopted UN agreement on marine conservation and \u201cbiological diversity of areas beyond national jurisdiction,\u201d or the BBNJ Agreement. This agreement was chosen as the bot\u2019s subject because of its lengthy and complicated negotiations, which reflected patterns of inequity between developed and developing nation-states. The experimental chatbot thus gave researchers an opportunity to test the kinds of responses the bot would provide when prompted and the ways implicit biases could surface in those responses. For example, the BBNJ Question-Answering Bot was asked how the BBNJ Agreement could affect human rights abuses in countries like Thailand and the U.S. The <a href=\"https:\/\/www.nature.com\/articles\/s44183-025-00132-7\/figures\/2\">article includes a figure<\/a> with both the questions researchers asked and the chatbot\u2019s responses, illustrating how biases that favor or disfavor a certain country, belief, or way of thinking can influence not only the training behind chatbot models but also the responses they provide.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"238\" src=\"https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/44183_2025_132_Fig5_HTML-copy.jpg\" alt=\"\" class=\"wp-image-20602\" style=\"width:588px;height:auto\" srcset=\"https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/44183_2025_132_Fig5_HTML-copy.jpg 800w, https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/44183_2025_132_Fig5_HTML-copy-300x89.jpg 300w, 
https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/44183_2025_132_Fig5_HTML-copy-768x228.jpg 768w, https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/44183_2025_132_Fig5_HTML-copy-364x108.jpg 364w, https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/44183_2025_132_Fig5_HTML-copy-500x149.jpg 500w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\">Simplified figure outlining how the BBNJ Question-Answering Bot functions<\/figcaption><\/figure>\n<\/div>\n\n\n<p><strong>Four Pillars of Ocean Governance<\/strong><\/p>\n\n\n\n<p>Ocean governance refers to how the world\u2019s oceans and their resources are managed. \u201cOne side of it is how to govern the ocean: the structure of the governments, laws, and systems that govern,&#8221; Ota says. &#8220;Part of this side is understanding the system: how the law is deployed, what resources are given, etc. The other side is the accountability, legitimacy, transparency, and responsibility of how we are governing oceans. 
This side centers on assessing if the way we\u2019re doing this is legitimate and accountable, whether people are actually taking responsibility.\u201d&nbsp;<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"700\" height=\"700\" src=\"https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/Matt-Ziegler-copy.jpg\" alt=\"\" class=\"wp-image-20603\" style=\"width:308px;height:auto\" srcset=\"https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/Matt-Ziegler-copy.jpg 700w, https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/Matt-Ziegler-copy-300x300.jpg 300w, https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/Matt-Ziegler-copy-150x150.jpg 150w, https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/Matt-Ziegler-copy-364x364.jpg 364w, https:\/\/web.uri.edu\/cels\/wp-content\/uploads\/sites\/2130\/Matt-Ziegler-copy-500x500.jpg 500w\" sizes=\"auto, (max-width: 700px) 100vw, 700px\" \/><figcaption class=\"wp-element-caption\">Matt Ziegler, Ocean Nexus Innovation Fellow and Postdoctoral researcher, UW Allen School of Computer Science and Engineering<\/figcaption><\/figure>\n<\/div>\n\n\n<p>Bridging technology and equity in global ocean governance is the central focus of the team\u2019s project. Matt Ziegler, Ocean Nexus Innovation Fellow, shared how AI is being incorporated in ocean governance talks and the risks it can pose to less-resourced nations. \u201cThere\u2019s a lot more uses of AI being proposed in ocean governance, such as modeling that comes up for estimating things like fish populations in future years, and proposing designs for protected areas,\u201d he says. 
\u201cIt\u2019s essentially being used by countries bringing in a lot of resources, so we are hoping that this paper will give the less-resourced countries a way to question that, and show the real risks of introducing the same kinds of human biases and putting those lower-resourced countries at a disadvantage.\u201d<\/p>\n\n\n\n<p><strong>AI and Ocean Governance: How the Project Came to Be<\/strong><\/p>\n\n\n\n<p>Ota\u2019s commitment to collaboration, accountability, and responsibility in global ocean governance spurred the development of <a href=\"https:\/\/oceannexus.org\/network\/\">Ocean Nexus<\/a>, an ocean research institute that brings scholars together from across the globe. He was interested in investigating how technology could be used to increase transparency and accountability in ocean governance, and he began collaborating with Ziegler, who was working on projects related to the development of technologies such as artificial intelligence models. In the paper, the authors write: \u201cLLMs are already having an impact on marine policymaking processes, despite their risks being poorly understood. A number of this paper\u2019s authors have already observed State representatives and delegates using ChatGPT at the UN for purposes including the drafting of interventions, statements, submissions, and biographies; asking it questions to conduct background research; and even generating whole presentations. Some countries have already developed policies for ChatGPT use for their governmental officials\u201d (Ziegler et al., 2025).&nbsp;<\/p>\n\n\n\n<p>\u201cIn UN negotiations, there is a huge imbalance between countries with access to researching information and resources, and those which don\u2019t,&#8221; Ota adds. &#8220;We were asked by colleagues working with those within these negotiations to assess whether ChatGPT is safe to use and how useful it could really be. 
That&#8217;s how this project came to be.&#8221;<\/p>\n\n\n\n<p><strong>Looking Toward a More Equitable Future<\/strong><\/p>\n\n\n\n<p>\u201cWe are still hopeful that LLMs could yield some positive results for developing States and other marginalized actors, despite the equity concerns that we have outlined,\u201d note Ota and colleagues in their paper. They outline areas where AI can prove useful for improving equity in ocean governance, such as helping draft legislation, interpret policies, and support international consultations (Ziegler et al., 2025). The research team remains optimistic, though cautious, about the future of AI technology\u2019s use in global ocean policy. They hope their work opens further discussion on how nations can leverage technology in equitable, accountable, and responsible ways at international negotiating tables.&nbsp;<\/p>\n\n\n\n<p>Ota shared how the team intends to use the study\u2019s findings to create a training program for policymakers. He said, \u201cWe are hoping to create a training program for the people in those negotiations. It\u2019s a short training to basically teach people that you can still use ChatGPT, but there are things you need to be careful of. We ran a workshop with the Ocean Voices group called <a href=\"https:\/\/mattziegler.net\/llms-for-policymakers-slides\/\">ChatGPT for Policymaking Practitioners<\/a><em>.<\/em>\u201d Another training, <a href=\"https:\/\/mattziegler.net\/designing-equitable-ocean-technology-slides\/\">Designing Equitable Ocean Technology<\/a>, is also available online. Through such trainings, Ota and Ziegler aim to help policymakers understand the risks of AI technology while developing equitable ways to incorporate such technology into marine policy. Ota adds, \u201cWe are actively engaging with those who are in ocean governance to understand this risk. 
This paper is like the evidence for us to say, \u2018We have a major publication, so we know that this technology is biased.\u2019\u201d&nbsp;<\/p>\n\n\n\n<p>You can <a href=\"https:\/\/www.nature.com\/articles\/s44183-025-00132-7#Abs1\">read the full article here<\/a>.&nbsp;<\/p>\n\n\n\n<p><em>Written by Yvonne Wingard, CELS Communications Fellow<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Professor of Marine Affairs Yoshitaka Ota has been working with colleagues to closely examine potential challenges and unintended consequences of artificial intelligence use within marine policy.<\/p>\n","protected":false},"author":1089,"featured_media":19635,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[26],"tags":[],"class_list":["post-20601","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"acf":[],"_links":{"self":[{"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/posts\/20601","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/users\/1089"}],"replies":[{"embeddable":true,"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/comments?post=20601"}],"version-history":[{"count":2,"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/posts\/20601\/revisions"}],"predecessor-version":[{"id":20654,"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/posts\/20601\/revisions\/20654"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/media\/19635"}],"wp:attachment":[{"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/media?parent=20601"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/categories?post=20601"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/web.uri.edu\/cels\/wp-json\/wp\/v2\/tags?post=20601"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}