Tuesday, October 15, 2024

Humanizing AI with Tariq King (a PNSQC Live Blog)

I've always found Tariq's talks to be fascinating and profound, and this time around we're going into some wild territory.

AI is evolving, and with each new development, it’s becoming more "human". It’s not just about executing tasks or analyzing data—it’s about how AI communicates, adapts, and even imitates.

So AI is becoming human... in how it communicates. That's a big statement, but with that qualifier it is more understandable. AI is no longer a cold, mechanical presence in our lives. Today’s AI can respond based on context, understanding the tone of requests and adjusting replies accordingly. It can mimic human conversation, match our language, and create interactions that feel amazingly real. Whether you’re chatting with a customer service bot or getting personalized recommendations, AI can engage with us in ways that were once the domain of humans alone.

Okay, so if we are willing to say that AI is "becoming human", how should we shape these interactions? What should the boundaries be for AI communication, and how do we ensure it serves us, rather than replaces us?

Beyond just communication, AI is showing remarkable creativity. AI can now write stories, compose music, and generate art, ranging from wild and weird to quite stunning (I've played around with these tools for several years, and I have personally watched these capabilities develop into something formidable and impressive). What once seemed like the exclusive realm of human creativity is now being shared with machines. AI is no longer just a tool; it’s being used as a collaborator, generating solutions and creative works that blur the line between human and machine-generated content.

Tariq points out that this raises some significant and critical questions. Who owns AI output? How do we credit or cite AI authorship? How do we confirm the originality of works? Perhaps more to the point, as AI generates content, what is the human role in the creative process? And how do we ensure that the human element remains at the forefront of innovation?

AI is getting better and better at convincingly imitating humans. But there’s a caveat: AI is prone to hallucinations, meaning it can produce plausible and relatable material that feels right for the most part but may be wrong (and often is wrong). I have likened this in conversations to having what I call the "tin foil moment". If you have ever eaten a food truck burrito (or any burrito to go, really), you are familiar with the foil wrapping. That foil can sometimes get tucked into the folds and rolls of the burrito. Occasionally, we bite into that piece of tin foil, and once we do, oh do we recognize that we have done so (sometimes with great grimacing and displeasure). Thus, when I am reading AI-generated content, much of the time I have that "tin foil" moment, and it takes me out of believing it is human (and often stops me from being willing to read what follows, sadly).

The challenge here is not just humanization; it is critical oversight, so that AI does what we want it to do and does not go off the rails. How do we prevent AI from spreading misinformation? And how can we design systems that help us discern fact from fiction in a world where AI-generated content is increasingly common?

Okay, so we are humanizing AI... which raises a question: "Is this something we will appreciate, or is it something that we will fear?" I'm on the fence a bit. I find a lot of the technology fascinating, but I am also aware that humanity is subject to avarice and mendacity. Do we want AI to be subject to those vices as well, or worse, to actively practice them? What unintended consequences might we see or incur?

For some of you out there, you may already be thinking of an abstract idea called "AI Governance": the practice of putting guardrails and safety precautions around AI models so that they perform as we want them to. This means setting clear ethical guidelines, building robust oversight mechanisms, and working to ensure that AI is used in ways that benefit society. More to the point, we need to continuously monitor and work with AI to help ensure that the data it works with is clean, well-structured, and not poisoned. That is a never-ending process, and one we have to be diligent and mindful about if we wish to be successful with it.

Make no mistake, AI will continue to evolve. To that end, we should approach it with both excitement and caution. AI’s ability to communicate, create, and imitate like humans presents incredible opportunities, but it also brings with it significant challenges. Whether AI becomes an ally or a threat depends on how we manage its "humanization".

1 comment:

Nate Custer said...

I think this conversation is brushing up against some profound philosophical questions without diving too deep into them. The questions "What does it mean to be human?" and "What makes humans different from animals?" are things people have discussed for ages. I don't see where this talk really engages with that tradition. If we say that what makes us human is our ability to communicate in ways other humans recognize, this has led in the past to dismissing the intelligence and humanity of indigenous people, and in more recent times has led to abuse of people whose disabilities make it a challenge to speak.

I suspect the questions around credit and ownership of works created with new tools are something we need to think carefully about. I was really inspired by a talk one of my colleagues gave. Ashton pointed out that one of the reactions to the invention of the camera was the rise of impressionism and some great works of art. Artists asked what they could capture about a scene that the new tools couldn't, and creativity expanded. Laws like copyright are social contracts designed to promote creativity and make it possible for artists to survive.