Analyzing deep learning defenses with Z3
Modern cars have cameras that recognize traffic signs along the road: for example, your car may detect that there is a “Stop” sign in front of you. Cars achieve this with clever machine learning models. Unfortunately, these models can be attacked. An attacker can slightly change a few pixels in the picture to trick the model into thinking it is a “Highway” sign, even though a human does not see any difference. Such an attack is called an adversarial attack.
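To make this concrete, here is a minimal sketch (not taken from the thesis) of the idea behind such an attack, applied to a toy linear classifier. The weights, the "image", and the step size eps are all hypothetical; every pixel changes by at most eps, yet the classifier's score is pushed toward the wrong class.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)        # toy linear "stop sign" detector (hypothetical)
x = 0.05 * w                    # toy "image" the detector clearly labels "stop"

def score(img):
    return float(w @ img)       # positive score = "stop sign"

# Step in the style of the fast gradient sign method (FGSM): for a linear
# score the gradient with respect to the input is w, so shifting every pixel
# by -eps * sign(w) lowers the score as much as possible while changing no
# pixel by more than eps.
eps = 0.1
x_adv = x - eps * np.sign(w)

print("original score: ", score(x))                    # positive: "stop"
print("perturbed score:", score(x_adv))                # drops by eps * sum(|w|)
print("max pixel change:", np.max(np.abs(x_adv - x)))  # exactly eps
```

Real attacks work the same way on deep networks, except that the gradient must be computed by backpropagation instead of being a fixed weight vector.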
To counter this, several defense techniques have been proposed. They analyze an image to decide whether it is a normal image or an adversarial example, as sketched below. However, it is still unclear which defense technique is best, or how to compare them properly.
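Defenses come in many flavors. As one illustrative example (my choice here, not necessarily one of the defenses studied in the thesis), the sketch below flags inputs whose classification is unstable under small random noise, exploiting the fact that adversarial examples tend to lie close to the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)            # toy linear "stop sign" detector (hypothetical)

def score(img):
    return float(w @ img)           # positive score = "stop sign"

def looks_adversarial(img, n_copies=20, sigma=0.1):
    # Classify several randomly perturbed copies of the input. Clean inputs
    # far from the decision boundary get unanimous votes; inputs pushed just
    # across the boundary (as adversarial examples often are) get mixed votes.
    votes = [score(img + rng.normal(scale=sigma, size=img.shape)) > 0
             for _ in range(n_copies)]
    return 0 < sum(votes) < n_copies

print(looks_adversarial(0.05 * w))  # clean input far from the boundary: expected False
```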
In my thesis, I study several of these defenses and compare how well they perform using techniques from the research area of formal verification.
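To give a flavor of what such a formal analysis can look like, here is a minimal sketch using the Python bindings of the Z3 SMT solver (PyPI package z3-solver). The two-pixel linear classifier, the input, and the perturbation budget eps are hypothetical, and the thesis's actual models and queries may look quite different. The key idea: if the solver reports that no satisfying perturbation exists, that is a mathematical proof of robustness for this input and budget, not just an empirical observation.

```python
from z3 import Real, Solver, And, sat

# Hypothetical classifier: score = 0.8*x1 - 0.5*x2; class "stop" iff score > 0.
w1, w2 = 0.8, -0.5
x1, x2 = 1.0, 0.2        # a concrete input the classifier labels "stop"
eps = 0.1                # allowed change per pixel

d1, d2 = Real('d1'), Real('d2')
s = Solver()
# The perturbation of each pixel stays within the epsilon budget.
s.add(And(d1 >= -eps, d1 <= eps, d2 >= -eps, d2 <= eps))
# Ask Z3: can a perturbation within that budget flip the decision?
s.add(w1 * (x1 + d1) + w2 * (x2 + d2) <= 0)

if s.check() == sat:
    print("Adversarial perturbation exists:", s.model())
else:
    print("No such perturbation: provably robust for this input and epsilon.")
```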
Reinier Joosse
I chose to write my thesis at Info Support because Info Support is actively engaged in applied Artificial Intelligence research. As AI is used more and more in daily life, it becomes increasingly important for these methods to be explainable, secure, and reliable. I hope that my research will contribute to the safety of future machine learning applications.