It’s only been a few months since OpenAI released ChatGPT, and already it feels like AI is old news. Everyone is talking about it. We’re guilty of it too, shamelessly so, and you know what? We’ll continue to be!

Welcome to the first issue of the Neural Network Notebook: a regular, quick hit of our insights and speculation on the interaction of artificial intelligence and the law.

Why this? Why now?

Sure, artificial intelligence has been around for a while. Even chatbots aren’t new. But within the last year or so, perhaps starting with OpenAI’s release of DALL-E, it has become the zeitgeist. This generation of AI, generative AI, has popularized and democratized artificial-intelligence tools. AI is, all of a sudden, available to everyone.

Let’s put this in context.

Humans have explored the idea of intelligent, conscious machines for hundreds of years. In 1726, Jonathan Swift presaged ChatGPT in Gulliver’s Travels. The Engine, as he called it, was a contraption designed by the Academy in Lagado and used to generate ideas, particularly ideas for writing. The Engine generated text by rearranging wooden cubes inscribed with words. Although intended as a critique of what Swift saw as an overly mechanistic approach to knowledge and creativity, the Engine foreshadowed the development of AI algorithms that process and generate human-like text. (Perhaps there’s a lesson still to be learned from Swift’s criticism.)

“True” AI started to take shape in the mid-20th century, with pioneers like Alan Turing. The 1956 Dartmouth Summer Research Project on Artificial Intelligence marked the beginning of AI research as a field; the proposal for that workshop is generally credited with coining the term “artificial intelligence.” From then on, as computational power and access to data increased, so too did the capabilities of AI.

It makes sense that the renaissance of AI comes on the heels of the age of big data, which emerged in the late 1990s and early 2000s. Data is oxygen to AI, and its abundance has spurred a Cambrian explosion of new digital lifeforms. With all this data, the dreams of pioneers like Turing are at last being realized.

Now, as we’ve discussed previously, over the past 30 years US lawmakers haven’t done much to curb the use and misuse of data by big companies and the government. Data regulation has fallen to the few states, like California, willing to address 21st-century issues.

Given that context, you can imagine how woefully underdeveloped our framework for regulating AI and its algorithms is. A few prominent voices (including, famously, Elon Musk) have been warning us about the risks of AI. When even the most vehement champions of technological progress are shouting warnings, we should pause and consider the implications of charging forward without thinking through how these new technologies can be implemented in a safe, responsible way.

So this is where we are. Where AI meets the law we currently have … empty space.

Of course, existing legal frameworks can and will step in to fill the void. In some cases that’s enough. In others it’s not.

We will help keep you informed as the law lurches and stumbles toward keeping up with the new wave of artificial intelligence.

This technology has immense potential. And yet we have already seen AI’s propensity to make biased hiring decisions, infringe intellectual property, fabricate facts, spew propaganda, leak data, and displace jobs. In one recent survey, roughly half of AI experts estimated that uncontrolled AI has at least a 10% chance of leading to humanity’s extinction. The law has a lot of catching up to do.

It will take time to develop a framework that works. Some regulatory bodies have taken early steps, and we have good evidence to hypothesize about the considerations and measures that may follow in the US and around the world. But AI is moving at an unprecedented pace. And law, well, the law is not known for its speed. So buckle in. We expect a bumpy ride.

We will continue to explore these and other AI-related legal topics in our new Neural Network Notebook. Stay tuned!