A self-driving car approaches a stop sign, but instead of slowing down, it accelerates into the busy intersection. An accident report later reveals that four small rectangles had been stuck to the face of the sign. These fooled the car’s onboard artificial intelligence (AI) into misreading the word ‘stop’ as ‘speed limit 45’.
Facial-recognition systems have been deceived by printed patterns stuck on glasses or hats, and speech-recognition systems have been tricked with sounds resembling white noise.
AI is part of daily life, running everything from automated telephone systems to user recommendations on the streaming service Netflix. Yet tiny alterations to inputs, typically imperceptible to humans, can fool the best algorithms around.
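To make "tiny alterations" concrete, here is a minimal sketch of one of the simplest such attacks, the fast gradient sign method (FGSM, Goodfellow et al., 2015). The PyTorch framing, the `model`, and the `epsilon` budget are illustrative assumptions, not details from any of the incidents above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    # FGSM: shift every input value by +/-epsilon in the direction
    # that most increases the model's loss on the true label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # imperceptible for small epsilon
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

A perturbation this small usually leaves the image looking unchanged to a human, yet it can flip the model's prediction; physical-world attacks like the stickers on the stop sign are more elaborate variants of the same idea.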
This panel discussion, bringing together leading researchers and industry leaders, will touch on these aspects and shed more light on how we can build more robust AI systems if they really are that easy to fool.
Key Takeaways:
Is current AI technology ready to be involved in critical domains such as healthcare and driving?
Adversarial attacks and ways to defend against them (see the sketch after this list).
What kind of research focus can help build robust AI algorithms?
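On the defense side, one widely studied countermeasure is adversarial training: crafting attacks on the fly and training the model on them. Below is a minimal sketch, assuming the hypothetical `fgsm_attack` above and illustrative `model` and `optimizer` names.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # One simplified adversarial-training step: generate FGSM examples
    # for the current batch and train on both clean and perturbed inputs.
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)  # reuses the sketch above
    optimizer.zero_grad()                      # clear grads left by the attack
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training this way typically trades some clean-input accuracy for robustness, which is itself one of the open research questions the panel raises.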