Feature

New Arlington startup emerges with a goal of helping organizations adopt AI responsibly

Sponsored by Monday Properties and written by ARLnow, Startup Monday is a weekly column that highlights Arlington-based startups, founders, and local tech news. Monday Properties is proudly featuring 1515 Wilson Blvd in Rosslyn. 

A Rosslyn-based startup says it is on a mission to help companies adopt artificial intelligence responsibly.

The company, Trustible, announced in mid-April that it emerged from “stealth” — a quiet period of growth and initial fundraising — with an “oversubscribed” $1.6 million in “pre-seed” funding, tech news outlet Technical.ly D.C. first reported.

That money will go toward hiring employees and improving its compliance solutions, which are aimed at helping companies demonstrate they are following emerging government regulations, such as those poised for adoption by the U.S. and the European Union, per a press release.

As this technology rapidly improves, companies worldwide are racing to adopt and adapt to it. In that haste, however, Trustible founders Gerald Kierce and Andrew Gamino-Cheong worry organizations could wind up failing to comply with government regulations or unleashing harmful applications of AI.

“AI is becoming a foundational tool in our everyday lives — from business applications, to public services, to consumer products,” they wrote in a blog post last month. “Recent advances in AI have dramatically accelerated its adoption across society — unquestionably changing the way humans interact with technology and basic services.”

Trustible founders Gerald Kierce, left, and Andrew Gamino-Cheong (courtesy photo)

Companies ramping up their use of AI are entering uncharted waters, however. The founders say these organizations have to answer tricky questions like whether AI can be biased and who is liable if AI breaks the law or produces results that are not factual. They worry about misuses such as wrongful prosecution, unequal health care and national surveillance.

“With great power comes great responsibility,” they say. “Despite good intentions, organizations deploying AI need the enterprise tools and skills to build Responsible AI practices at scale. Moreover, they don’t feel prepared to meet the requirements of emerging AI regulations.”

That is why demonstrating trust in AI will be key to its successful adoption, say Kierce and Gamino-Cheong.

“Many of the challenges we’ve outlined require interdisciplinary solutions — they are as much of a technical and business problem as they are socio-technical, political, and humanitarian,” per the blog post. “But there is a critical role for a technology solution to accelerate Responsible AI priorities and scale governance programs.”

That is where Trustible comes in. It provides all the minutiae companies need — checklists, documentation tools and reporting capabilities — to adopt AI as governments concurrently develop ways to regulate it.

The platform helps organizations define policies, implement and enforce ethical AI practices and prove they comply with regulations, in anticipation of compliance reviews and AI audits.

Trustible logo

Already, the U.S. and Europe appear poised to adopt regulations, they say.

In the U.S., the National Institute of Standards and Technology has released a framework the founders believe will inform any pending federal regulations. Meantime, the White House has released an “AI Bill of Rights” the founders say serves as a blueprint for institutions looking to develop internal AI policies.

The European Union is expected to adopt AI regulations that could become a blueprint for other countries. Already, Kierce and Gamino-Cheong say, 127 countries have enacted legislation containing the term “artificial intelligence.”

“Every organization developing or procuring AI will need to understand how these rules differ from each other, what internal controls must be put in place, and how to prove compliance across jurisdictions,” they wrote in their blog post.

Kierce and Gamino-Cheong are both veterans of D.C.-based software, data and media company FiscalNote, which uses AI in its delivery of global policy and market intelligence.

“The pair had spent nearly a decade applying AI to the policy landscape and are now applying global policy to the AI space,” per the press release. “FiscalNote was recently selected as one of OpenAI’s first plugin partners for ChatGPT.”

Trustible decided to incorporate as a Benefit Corporation, like Patagonia, meaning it is held to higher standards of accountability and transparency to show that it benefits society.