Here's a warning from the UK (where government use of AI is, in my view, ahead of the U.S.): as promising as AI is, we shouldn't get 'over the moon' about it just yet.
Executing government policy equitably and fairly requires transparency about how decisions are made. At the moment, we have no clear, standard way for advanced AI systems to explain their decisions; sometimes a decision will surprise us, and we cannot decipher how the neural network arrived at it.
I'm confident we will develop such standards (too many smart people are working on the problem for it NOT to happen), but we aren't there yet. We should continue to explore, but stay tuned on this question.
As AI increasingly assists with tasks in the public sector, machines need to be publicly accountable for their decisions in the same way other public servants are. That means they must be designed to be modifiable and auditable by the public-service professionals they work alongside.