AI: The Rock-Solid Case For Oversight and Accountability

By Eli Amdur

Sometimes, all you have to see is one small event, example, or case, and the big picture comes into compelling focus.

A report by Kashmir Hill and Tiffany Hsu (New York Times, 6/9/24) describes one of those times. According to their account, “a fly-by-night journalism outlet called BBN Breaking had used an A.I. chatbot from another news site” and created some nasty falsehoods about a prominent broadcaster. It took very little time for the story to catch fire and cause serious damage – all because of an A.I. mistake and, of course, no oversight or accountability – followed by a defamation lawsuit. The usual.

No surprise: this was not the only indiscretion resulting from BBN’s bad behavior, but they went inactive in April, leaving the detritus behind. The usual.

Why do I say “the usual”? Because, sadly, it already is. A.I., in all its potential, has shown unlimited capability to do mind-boggling things; coupled with humans’ willingness to cast off ethics, caution, and restraint in favor of instant gratification in the form of profit, competitive edge, and domination, this is already SOP.

BBN, it seems, was peddling more than the inadvertent hallucinations for which A.I. has become notorious. In their two years of operation, they purposely built the image of a legitimate news service, the kind old-timers like me remember and venerate, and claimed a worldwide lineup of “seasoned” journalists and millions of monthly visitors to their site.

A peek under the rug showed otherwise: single authors carrying multiple lengthy bylines simultaneously, writing in thin prose that even the most naïve reader could tell was generated by A.I., and freelancers writing faux news while masquerading as professional journalists.

If that’s not revolting enough, bad actors have gotten so far down the road already that their algorithms are handing out assignments with no human oversight at all. I will bet a kilo of Dutch chocolate that the next step is the algorithm handing an assignment … to itself … in an effort to render humans in journalism extinct.

The “2001: A Space Odyssey” Moment

Yes, you read that right. And if you’ve never seen the classic film “2001: A Space Odyssey”, now’s the time, before A.I. does to us what it nearly did to Frank and Dave. “I’m sorry, Dave, I’m afraid I can’t do that…” was the tipping point where A.I. could have taken control and subjugated the human race forever. If that seems preposterous, see the movie (astonishingly, made by Stanley Kubrick in 1968 and drawn from Arthur C. Clarke’s 1951 short story “The Sentinel”). It was so far ahead of its time that most people still don’t grasp it.

And now ask yourself if this doesn’t ring true, with actors like BBN taking a “damn the torpedoes; full steam ahead” approach.

Can A.I. Development and Restraint Coexist?

It’s evident: the developers of A.I. want no restraint in the form of ethics, governance, compliance, oversight, or accountability. Neither do the powerful and ominous dictators on the global scale. On the other side of the coin, there is an increasing call for regulation, cooperation, and level-headedness.

Can the two coexist? Left to their own devices, no.

And that makes the case for oversight and accountability.
