Hany Farid is a Professor at UC Berkeley with a joint appointment in Electrical Engineering & Computer Science and the School of Information. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, his M.S. in Computer Science from SUNY Albany, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999, where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.
November 18, 2020 04:00 PM PT
Digital Forensics Pioneer, UC Berkeley
We have all been the subject of predictive computer algorithms of the form: "If you liked X, then you may like Y." Music and movie streaming sites and online shopping sites routinely analyze our previous listening, viewing, and shopping habits, compare them with those of other users, and then make recommendations for us. Many of us have been subjected to a predictive algorithm of the form "If you are like X, then we may not give you a loan or a job."
Banks and employers routinely make lending and hiring decisions based on comparing your personal attributes with those of others. And, if you've recently had a run-in with the criminal justice system, you may have been subjected to a predictive algorithm of the form "If you are like X, then you may go to jail."
These predictive algorithms, trained on historical data, have been accused of reflecting and amplifying past racial and gender injustices instead of removing them, as intended. We evaluate the claim that predictive computer algorithms are more accurate and fair than people tasked with making similar decisions. We also evaluate, and explain, the presence of racial bias in predictive algorithms used in the criminal justice system.
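The "If you are like X, then you may like Y" pattern described above is often implemented as user-based collaborative filtering: score how similar two users' histories are, then recommend what the most similar user liked. A minimal sketch follows; all user names, items, and ratings are hypothetical illustration data, not from the talk.

```python
# User-based collaborative filtering sketch: cosine similarity over
# the items two users have both rated, then recommend unseen items
# from the nearest neighbor. All data below is invented for illustration.
from math import sqrt

ratings = {
    "alice": {"jazz": 5, "rock": 1, "folk": 4},
    "bob":   {"jazz": 4, "rock": 2, "folk": 5, "metal": 1},
    "carol": {"jazz": 1, "rock": 5, "metal": 4},
}

def cosine(a, b):
    """Cosine similarity restricted to items both users rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sqrt(sum(a[i] ** 2 for i in shared))
    norm_b = sqrt(sum(b[i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Return items the most similar other user rated that `user` has not."""
    others = [(cosine(ratings[user], ratings[u]), u)
              for u in ratings if u != user]
    _, nearest = max(others)
    seen = set(ratings[user])
    return [item for item in ratings[nearest] if item not in seen]

print(recommend("alice"))  # alice is most similar to bob -> ['metal']
```

The same "find your nearest neighbors in historical data" machinery, applied to lending, hiring, or sentencing records, is exactly what raises the fairness questions the talk examines: the prediction inherits whatever patterns, just or unjust, are in the training data.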
Data Editor, BuzzFeed News
BuzzFeed News data editor Jeremy Singer-Vine will describe the data efforts that underpinned the FinCEN Files — the recent investigative collaboration between BuzzFeed News, the International Consortium of Investigative Journalists, and more than 100 partner news organizations around the world. Using the FinCEN Files as a point of reference, he'll also discuss several ways in which the goals and techniques of data journalism differ from data work in other fields.
June 24, 2020 05:00 PM PT
Kim Hazelwood - Deep Learning: It’s Not All About Recognizing Cats and Dogs (Facebook) - 5:22
Hany Farid - Creating, Weaponizing, and Detecting Deep Fakes (UC Berkeley) - 24:40
Deep Learning: It’s Not All About Recognizing Cats and Dogs
Based on a recent blog post and paper, this talk focuses on the fact that recommendation systems are underinvested in by the research community overall, and why that's problematic.
Creating, Weaponizing, and Detecting Deep Fakes
The past few years have seen a startling and troubling rise in the fake-news phenomenon, in which everyone from individuals to nation-sponsored entities can produce and distribute misinformation. The implications of fake news range from a misinformed public to existential threats to democracy and horrific violence. At the same time, recent and rapid advances in machine learning are making it easier than ever to create sophisticated and compelling fake images, videos, and audio recordings, making the fake-news phenomenon even more powerful and dangerous. I will provide an overview of the creation of these so-called deep fakes, and I will describe emerging techniques for detecting them.