All right, here we go with Day 2!
First of all, I want to give props to Joe Colantonio and everyone else who managed yesterday's sessions and kept everything on track. For those wondering what it's like to run a conference totally online: it's not always seamless, but Joe handled the issues like a pro. A few of the interesting changes:
There is a Virtual Expo, so if you want to browse what vendors are showing on your own time and at your own pace, you can check out the Virtual Expo by clicking here.
Questions and answers are handled through an app called Slido, which makes for a clean way to ask questions and interact with each speaker rather than trying to manage a Zoom chat feed. Again, a neat approach and well presented.
So for today's opening keynote, it's exciting to see friends I interact with getting to be keynote speakers. Raj Subrameyer and I have interacted for several years. He was also a recent guest on The Testing Show podcast, talking about tech burnout (to be clear, not the topic of today's talk). If you'd like to hear our interview with Raj, you can check it out by clicking this link.
Raj's talk is focused on building Inclusive AI. Sounds scary, huh? Well, it doesn't need to be. He opens with three movies (2001: A Space Odyssey, Her, and Ex Machina). What's interesting about these movies is that they were science fiction, and now they are science fact; sci-fi has caught up with our present. The question we might want to ask is: is this a good thing? It all comes down to how you look at it. Are you using Siri or Alexa regularly? To be honest, I don't use them very often, but I have worked with them, so I'm not a Luddite. Still, there's a small part of me that doesn't want to rely on these tools just yet. Is that a fear-based thing? A trust-based thing? Maybe a little of both. Do I really want these AI systems listening in on me? Well, if I use apps like Google, Amazon, Facebook, Instagram, or TikTok (hey, don't judge), I'm already training these systems. Alexa is just a voice attached to a similar one.
Let's face it, technology can be creepy. It can also be very interesting if we understand what is happening. AI systems are being trained all the time: facial recognition, text recognition, voice recognition are all tweaked in similar ways. As Tariq King explained in a talk last year at TestBash San Francisco, it's not anything sinister or even terribly complex. Ultimately, it all comes down to agents that keep score. When an agent gets something right, it increments a counter of the times it has successfully guessed or provided the right answer; when it gets something wrong, it decrements that counter. Over time, the counters help figure out what is right more often than not. It's not perfect, it's not even intuitive, but it's not really superhuman or even all that complicated. We just tend to make it, and treat it, as such.
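To make that concrete, here's a minimal sketch of the counter idea as I understood it (my own toy code, not Tariq's actual example): an agent that bumps a score up when a guess turns out right and down when it's wrong.

```python
import random

# A toy "agent" that keeps a score for each candidate answer.
# Right guesses increment that answer's counter, wrong guesses
# decrement it. Over many rounds, the highest counter tends to
# point at the right answer.

class ScoringAgent:
    def __init__(self, candidates):
        self.scores = {c: 0 for c in candidates}

    def guess(self):
        # Mostly trust the best-scoring candidate, with a little
        # random exploration so early mistakes don't lock us in.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def feedback(self, guess, correct):
        self.scores[guess] += 1 if guess == correct else -1

agent = ScoringAgent(["cat", "dog", "bird"])
for _ in range(1000):
    g = agent.guess()
    agent.feedback(g, "dog")  # "dog" is the right answer in this toy world

print(agent.scores)  # "dog" ends up with by far the highest count
```

No gradients, no magic: just counts nudged up and down by feedback.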
Raj points out that the neural network inside each of our brains has roughly as many synaptic connections as there are stars in our galaxy (maybe more) and, to quote James Burke, "everybody has one!" The most powerful computers still pale in comparison to the connectivity and plasticity of a single human brain (though interconnected systems can certainly match or exceed single brains).
AI can be classified as weak or strong. Most of the systems we interact with today are Weak AI. Yes, they can be trained to give a response and they can perform specific steps. Systems like Deep Blue can play chess and beat the best human players, but that is still Weak AI: the system can brute-force avenues, and do it fast, but it can't really "think". Strong AI can think, emote, sympathize, and deal with situations dynamically, the way people do. So far, very few AI systems, if any, can do that.
I'll use an example from my own musical life. I've recently been shopping for guitar amplifier heads, older ones. My all-time favorite guitar tone comes from the Marshall JMP amplifier head, which was popular in the early-to-mid 1970s. I also very much like the Carvin X100B Series III amplifier head. A Weak AI would be able to compare the specs of both amps and give me a readout of which one measures better for fault tolerance or frequency response. It will not, however, be able to tell me which amplifier head "sounds better". That's a human judgment, and it's not something data will necessarily be able to answer.
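In code, that limitation looks something like the sketch below. The amps are real, but every spec number here is made up purely for illustration:

```python
# Toy illustration of the Weak AI point: specs are easy to compare
# mechanically, but "sounds better" was never in the data.
# (Spec values invented for the example.)

amps = {
    "Marshall JMP":     {"watts": 100, "channels": 1, "weight_lbs": 40},
    "Carvin X100B III": {"watts": 100, "channels": 2, "weight_lbs": 38},
}

def best_by(spec):
    # Return whichever amp scores highest on a measurable spec.
    return max(amps, key=lambda name: amps[name][spec])

print(best_by("channels"))   # answerable from the data
# print(best_by("tone"))     # KeyError: "sounds better" isn't a spec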
We may be familiar with the study in which identical resumes were submitted, some with names traditionally perceived as "white" and some with names traditionally perceived as "black". An AI system trained on that group of data would, interestingly, reject resumes with "black" names twice as often as ones with "white" names. That definitely invites a question: how did the system "learn" to do that? Was it trained to do that purely based on the text in the resumes, or did some bias enter the system from the programmers? It's an interesting question, and hey, I know what I think about this (hint: humans biased the system), but I asked a Slido question, so let's see if it gets answered later ;).
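One plausible mechanism (my speculation, not anything Raj showed) is that the bias rides in on the training labels: if past decisions were biased, a model that faithfully learns from them reproduces the bias. The names, data, and "model" below are all invented for illustration:

```python
from collections import Counter

# Toy demo of bias baked into training labels: the "historical"
# decisions below are biased, so a model fit on them faithfully
# reproduces the bias. Everything here is invented.

training = [
    ("Emily",   "qualified", "interview"),
    ("Greg",    "qualified", "interview"),
    ("Lakisha", "qualified", "reject"),
    ("Jamal",   "qualified", "reject"),
] * 100

# "Learning" as weighted counts, just like the agent example above.
counts = Counter((name, outcome) for name, _, outcome in training)

def predict(name):
    # Pick whichever outcome this name co-occurred with most often.
    return max(("interview", "reject"), key=lambda o: counts[(name, o)])

print(predict("Greg"))   # interview
print(predict("Jamal"))  # reject: same qualifications, biased labels
```

Nothing in that code was "programmed to discriminate"; the counts simply mirrored biased data.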
Another thing to consider is that AI can be abused, and it can also be fooled. In a world with applications like Photoshop and video editing tools, deep fakes can be created. Feed a system enough deep fakes and it can be trained on literally fake information, developing agent counts that are not based on reality. Scary, but definitely feasible.
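Tying that back to the counter idea from earlier (again, my own toy sketch, not from the talk): if an attacker fakes most of the "ground truth" an agent is graded against, the counters happily converge on the fake answer.

```python
import random

# Toy poisoning sketch: the same counter-style learner as before,
# but most of the "correct" answers it is graded against are faked.

scores = {"genuine": 0, "deepfake": 0}

for _ in range(1000):
    guess = max(scores, key=scores.get)  # trust the current counters
    # Poisoned feedback: 90% of the time the "truth" says deepfake.
    truth = "deepfake" if random.random() < 0.9 else "genuine"
    scores[guess] += 1 if guess == truth else -1

print(scores)  # "deepfake" ends up far ahead: the counts mirror the fakes
```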
Ultimately, AI is only as good as the data it is given, the people who program the systems, and the algorithms that train them. Systems can "learn", but again, that learning is a weighted count of something: the more often an answer is "right", the higher the count, and the greater the odds that answer will be chosen as "right" the next time. Interesting stuff, to be sure, but I'd argue that these systems replacing human intuition and interaction is still quite a way off. That's not an invitation to be complacent; it's a recommendation to spend time learning about these systems, how to better understand and interact with them, and to remember that we have a responsibility to make sure the systems we build are not just good quality but also fair to everyone.