
workers to be able to ask questions of (like state and federal policy).

Laying the Foundation

There are several critical concepts that continue to guide the development of Traverse. First and foremost, Northwoods firmly believes that AI tools, although technically capable, should not make any decisions for workers. These workers have spent countless hours of training in their profession and understand the human element that often cannot be translated to technology. Instead, AI tools should lift up and expose case information that is already there so the worker can ultimately use it to make more informed, confident decisions.

Additionally, front-line workers and end users must be invited to participate in the development and implementation of AI tools, rather than other people making assumptions on their behalf. For example, Northwoods works closely with a subset of our customers who are regularly testing new functionality and sharing their feedback, concerns, or suggestions to make the tools work how they need them to. Only after this focus group, as well as our internal social workers, gives the green light do we deploy new features to all customers.

Lessons Learned

Whether you’re a fellow vendor developing AI solutions or an agency leader wanting to leverage them, there are a few things we can all do to ensure these tools complement workers’ professional capabilities, rather than compromising them.

■ Focus on efficiencies and outcomes. Agencies must verify that AI tools will truly create efficiencies and not just invest in them because they are the next new thing. Workers need to trust that the solution is providing accurate information and actionable results. Otherwise, they will continue to do the work the “old” way, which just duplicates their efforts.

■ Make sure data inputs are reliable. The results you’ll get from an AI solution are only as good as the data you feed into it. The adage “garbage in, garbage out” applies. AI tools won’t make an impact without complete and accurate data in place to feed into the system.

■ Trust but verify. Even if you’re careful to provide an AI tool with the right information to help it understand what you need, you should still do your own fact checking to verify the results are accurate. It’s critical that you can quickly and easily access the full source content that the tool is reading and analyzing to inform its results. This ensures that you have all the context and additional details needed to verify the accuracy of the tool’s findings.

■ Tread carefully with free tools. A broadly available tool (think ChatGPT) may seem like a quick and easy-to-use option for exploring AI, but it can pose significant security risks. On the other hand, many industry-focused vendors have built their tools through a social services lens and have learned from child welfare policy, agency audits, Child and Family Services Reviews, and others, to support evidence-based best practices. They’ve also built in advanced privacy features to safeguard sensitive client information.

Conclusion

The rapid evolution of AI has created a paradigm shift across the industry, creating exciting possibilities for human services workers to achieve outcomes and efficiencies that may otherwise be unattainable. Tools need to be developed with a focus on data privacy, responsible decision making, and trust in their outputs to bring this future vision to life.

Steve Roth is the Chief Solutions Officer at Northwoods, where he oversees product delivery, product management, and customer support. He also serves as General Manager of Case Aide Services.

