How City Leaders Are Exploring the Use of Generative AI to Improve Local Government
Generative artificial intelligence (AI) tools like ChatGPT are capturing the interest of city leaders for their ability to rapidly generate written content in response to prompts. According to a recent survey by Bloomberg Philanthropies, 96% of mayors are interested in how they can leverage this emerging technology to enhance their local governments. While only 2% are actively deploying generative AI today, 69% report they are in the exploration or testing phases.
City leaders see potential for generative AI to help address pressing issues around transportation, infrastructure, public safety, climate, education and more. However, broader implementation is hindered by factors like budget constraints, lack of technical expertise, and ethical considerations around security, privacy and transparency.
To help cities overcome these obstacles, nearly 100 mayors gathered last month in Washington, D.C. for Bloomberg's 2023 Mayors Innovation Studio (MIS). The goal was to provide hands-on experience with generative AI and to collaborate on strategies for using it effectively and responsibly. Some key takeaways included:
- Appointing an in-house leader to explore uses, ask questions and stay updated on advances. This doesn't require deep technical skills, just curiosity and willingness to experiment.
- Testing constantly as the technology rapidly evolves. Visiting local colleges or businesses using AI can help cities visualize future applications.
- Balancing early guardrails with enough latitude for meaningful exploration. For example, Boston created simple guidelines like not using sensitive data and disclosing AI use to residents.
- Conducting an inventory of current AI use throughout city departments. This reveals existing capabilities and opportunities.
- Starting with low-risk applications like generating cultural content to build trust and experience before expanding to more sensitive tasks.
Designating an Exploratory Leader is Crucial
Since mayors need help navigating technical aspects and ethical implications of generative AI, designating an exploratory leader is advised. This person would devote time to learning about the technology, experimenting with applications, leading teams, asking critical questions and keeping abreast of advancements.
According to Beth Blauer, associate vice provost for public sector innovation at Johns Hopkins University, deep technical skills aren't required. Curiosity, willingness to try new things and making connections between AI capabilities and problem solving are more important. This leader can also share findings and provide training to build collective knowledge across city departments.
Constant Learning is Key as Technology Evolves Quickly
Because generative AI is continuously evolving, cities need to bake ongoing testing and learning into their strategies. Assumptions based on current limitations risk underestimating future potential.
"Don't freeze your image in your mind about what generative AI is based on what you see today, because it's evolving [and] probably getting better," advised Harvard Business School Professor Mitchell Weiss at the MIS event.
Buenos Aires exemplifies this approach with its chatbot, Boti. Residents use Boti via WhatsApp to access services like bike sharing. Melisa Breda, the city's Undersecretary of Evidence-Based Public Policies, says they constantly test because "the tools of the day go through evolution, they change."
Hands-on exploration at local colleges and businesses also helps city leaders visualize possibilities beyond the status quo. As Weiss noted, understanding realistic capabilities and limitations is a "precondition for deciding how we should and shouldn't use these tools."
Guardrails Can Balance Exploration with Responsible Use
Mayors want to avoid missteps that could erode public trust or lead to biased outcomes. However, imposing overly restrictive policies too early risks blocking potentially transformative applications.
For example, flawed training data could cause generative AI to present incorrect information as fact or fail to represent certain groups. Even small mistakes can be magnified when interacting with many residents.
"We could lose the trust of our residents if they feel that, when they interact with us, they’re not going to get genuine reactions from us—that they’re just getting this machine," explained Santiago Garces, Boston's Chief Information Officer.
Boston created straightforward guidelines including not using sensitive data, disclosing AI use to citizens, and reviewing outputs for accuracy. These simple guardrails encourage exploration while mitigating obvious risks.
Inventories Reveal Existing Capabilities to Build Upon
With generative AI likely already in use across departments, understanding current applications is pivotal. This inventory allows cities to identify opportunities to expand on successful efforts.
According to Johns Hopkins' Blauer, "...you fundamentally need to know what's out there, and how it's being used." Residents and community groups may also be leveraging AI already.
Discussion channels help share information and ideas internally. Boston opened a Slack channel for employees to discuss AI potential, troubleshoot issues and track progress.
Beyond internal use, some cities are exchanging lessons learned through Bloomberg's new City AI Connect global learning community. This further accelerates knowledge development.
Starting with Low-Risk Applications Can Build Trust
When initially exploring generative AI, cities may want to focus first on low-risk applications like generating cultural content. This allows building experience before expanding into more sensitive tasks.
Buenos Aires took this approach to ensure early chatbots didn't breach trust or touch on controversial issues. Safeguards also prevented the technology from being prompted to display inappropriate content.
"There's a first layer of security that makes sure that neither the input nor the output contains information that we don't want to deliver," explained Undersecretary Breda.
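The two-sided screening Breda describes can be sketched in a few lines. The sketch below is purely illustrative, not Boti's actual implementation: the blocked-topic list, fallback message, and stand-in "model" are all hypothetical placeholders. The point is the shape of the guardrail — the same policy check runs on both the resident's prompt and the generated reply before anything is delivered.

```python
# Illustrative sketch of a chatbot guardrail layer: both the input and
# the output pass through the same policy filter. The topics, fallback
# text, and stand-in model below are hypothetical examples.

BLOCKED_TOPICS = {"password", "ssn", "home address"}  # hypothetical list
FALLBACK = "Sorry, I can't help with that. Please contact city services."

def violates_policy(text: str) -> bool:
    """Return True if the text touches any blocked topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def answer(prompt: str, model) -> str:
    """Screen the input, generate a reply, then screen the output."""
    if violates_policy(prompt):          # first layer: check the input
        return FALLBACK
    reply = model(prompt)
    if violates_policy(reply):           # second layer: check the output
        return FALLBACK
    return reply

# Demo with a trivial stand-in for the language model:
echo = lambda p: f"You asked about: {p}"
print(answer("Where can I rent a bike?", echo))
print(answer("What is my neighbor's home address?", echo))  # blocked
```

In practice a city would use a far more robust classifier than a keyword list, but the structure — reject on input, reject again on output — is the "first layer of security" the quote describes.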
Even seemingly mundane uses, like drafting letters, warrant review before public release. This gradual expansion lets cities develop policies and safeguards at a measured pace.
Futurist Cara LaPointe, who co-directed Johns Hopkins' Institute for Assured Autonomy, suggested starting with "low risk, but, potentially, really high impact" opportunities. This pragmatic approach prevents getting bogged down by the hardest cases prematurely.
Transportation and Infrastructure Top Application Areas
According to the Bloomberg survey, transportation topped the list of areas mayors feel generative AI could help. Reducing traffic congestion, improving road safety, and increasing mobility options were cited as potential use cases.
Infrastructure followed closely behind, with AI applications around maintenance, inspections, permitting, construction planning and project management. Public safety, climate action, education and health also ranked highly.
Automatically generating status reports, memos and correspondence could aid staff productivity across all these domains. Other early examples include:
- Translating materials into multiple languages to reach diverse residents
- Drafting job descriptions to remove bias and attract diverse candidates
- Identifying patterns in data to optimize pickups for waste collection and recycling
- Providing interactive chatbots to handle routine resident inquiries and requests
- Automating redaction of sensitive information for public records requests
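To make the last item concrete, here is a minimal sketch of pattern-based redaction for public records. The regular expressions below are simplified examples, not a production-grade PII detector; real systems combine broader pattern libraries with human review before release.

```python
import re

# Illustrative sketch of automated redaction for public records requests.
# The patterns are deliberately simple examples; a deployed system would
# use a fuller PII detector plus a human review pass before release.

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

record = "Contact Jane at 555-867-5309 or jane@example.com, SSN 123-45-6789."
print(redact(record))
```

Even this toy version shows why the approach fits the "low-risk first" advice: the redaction step is reviewable, deterministic, and easy to audit before any document goes out the door.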
Risks Require Ongoing Vigilance as Applications Expand
While starting cautiously with low-risk applications can build trust and understanding, cities must remain vigilant as use cases expand.
Bias perpetuated through flawed training data remains an issue. Continual monitoring, testing and improvements of AI systems are required to identify and mitigate risks of unfair or inaccurate outcomes.
Transparency is another common concern. Knowing when you're interacting with an AI versus a human—and being able to review the AI's logic—are important for accountability and trust.
Cybersecurity threats are also magnified by using cloud-based systems. Strict data governance, access controls and safeguards against hacking will only grow in importance.
Furthermore, while AI can automate tasks, it may also disrupt workflows and potentially displace roles. Understanding these wider impacts will allow cities to smooth the transition by retraining and redeploying affected staff.
With thoughtful leadership and responsible implementation, generative AI holds enormous potential as a transformative force for improving how cities serve residents. But unlocking its full promise will require confronting risks head-on through continuous engagement with staff, stakeholders, vendors and the public. If done right, cities have an opportunity to once again lead the way in responsibly adopting cutting-edge innovations for the public good.