Ars Frontiers recap: What happens to developers when AI can code?

Our second AI panel of the day, featuring Georgetown University's Drew Lohn (center) and Luta Security CEO Katie Moussouris (right).

The final panel of the day at our Frontiers conference this year was hosted by me, though I knew it would be tough to follow Benj's panel since I didn't have a cute intro planned. The topic we covered was what might happen to developers when generative AI gets good enough to consistently produce good code. Fortunately, our panelists didn't think we had much to worry about, at least not in the near term.

I was joined by Luta Security founder and CEO Katie Moussouris and Georgetown Senior Fellow Drew Lohn, and the general consensus was that, although large language models can do some extremely impressive things, turning them loose to create production code is a terrible idea. While generative AI has indeed demonstrated the ability to write code, even a cursory examination shows that today's large language models (LLMs) often do the same thing when coding that they do when spinning stories: they make a whole bunch of stuff up. (The term of art here is "hallucination," but Ars AI expert Benj Edwards prefers the term "confabulation," as it more accurately reflects what the models seem to be doing.)

So, while LLMs can be relied upon today to do simple things, like creating a regex, trusting them with your production code is way dicier.
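To give a sense of the kind of "simple thing" the panel had in mind, here is a minimal, hypothetical Python snippet of the sort an LLM can usually get right: a regex that matches ISO 8601-style dates. The pattern and test strings are my own illustration, not anything shown during the session.

# Hypothetical example of a "simple thing": validate dates like 2023-05-22.
import re

ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

for candidate in ["2023-05-22", "2023-13-01", "not a date"]:
    # Prints True only for the well-formed date; the other two fail to match.
    print(candidate, bool(ISO_DATE.match(candidate)))

Tasks like this are narrow, easy to verify at a glance, and low-stakes if they're wrong, which is exactly why they sit on the safer side of the line the panelists drew.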

