Possible Future of AI
Based on ideas in It's Alive!, La Trobe University Press, 2017, and accompanying slides
Note: none of this material is examinable
Where have we been
1950 Turing predicts thinking machines by 2000
1956 Dartmouth Summer Research Project on AI
1965 Dendral expert system for reasoning about molecular chemistry
1969 Perceptrons (early neural networks) by Minsky and Papert
1972 Shakey the robot (computer vision, path planning, A* search)
1984 Cyc project to encode all commonsense knowledge
1986 Backpropagation for multi-layer neural networks
1997 Chess grand master Kasparov loses to IBM Deep Blue
2005 DARPA Grand Challenge for autonomous vehicles won
2011 IBM Watson wins Jeopardy! game show
2015 AlphaGo uses deep RL and tree search to beat Go master
DARPA's third wave of AI
1st wave: handcrafted knowledge
2nd wave: statistical learning
3rd wave: contextual reasoning
AI functions more as colleague than as tool
Example applications: real-time analysis of sophisticated cyber attacks, detection of fraudulent imagery, human language technologies, control of prosthetic limbs
Robust AI: reliable and verifiable operation in complex environments
High Performance AI: 1000x faster, 1000x less power
Next Generation AI: explainable AI, ethical AI, common sense AI
https://www.darpa.mil/work-with-us/ai-next-campaign
An example scenario
How can we tell the difference between real news and fake news online?
Acknowledgements for this work
University of Melbourne
Olivier de Vel, Defence Science and Technology Group
Deep Learning is Everywhere
How can it be used to combat disinformation campaigns? How can adversaries disrupt defences that use machine learning?
Generative AI
https://www.synthesia.io/
An Example of Disinformation that Fooled Me
Scenario: Social Botnet for Disinformation
Malicious actors often use a botnet of automated accounts on a social media platform such as Twitter to amplify the impact of their influence campaigns, i.e., a force multiplier for trolls
How does a botnet work?
(1) Infiltrate target community on social media
(2) Use botnet to influence discourse
Step 1: Infiltrate Target Community
1. Attach social bot software to automate new/repurposed Twitter accounts
2. Pick a community of users to target
3. Each bot follows a subset of popular users in that community (red links)
4. Bots in the target community follow each other (black links)
5. Bots make several posts before interacting with real users (manual tweets, plagiarised tweets, synthetic tweets)
6. Repeat steps 3-5 until a sufficient number of bots have been followed by users in the target community (blue links)
[Diagram: the resulting follow network, with users and bots as nodes; a toy simulation of this graph construction is sketched below]
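Below is a toy simulation of this graph-construction process, purely for illustration: the community size, number of bots, and follow-back behaviour are made-up assumptions, not figures from the lecture.

import random
import networkx as nx

random.seed(0)
G = nx.DiGraph()                                   # edge u -> v means "u follows v"
users = [f"user{i}" for i in range(20)]
bots = [f"bot{i}" for i in range(5)]

for b in bots:
    for u in random.sample(users, 5):              # step 3: follow popular users (red links)
        G.add_edge(b, u)
    for other in bots:                             # step 4: bots follow each other (black links)
        if other != b:
            G.add_edge(b, other)

# Step 6 (simplified): assume some users follow a bot back (blue links).
for u in random.sample(users, 8):
    G.add_edge(u, random.choice(bots))

infiltration = sum(1 for u in users if any(G.has_edge(u, b) for b in bots))
print(f"{infiltration} of {len(users)} users now follow at least one bot")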
Step 2: Use Botnet to Influence Discourse
1. Spread politically motivated rumours
(fake grassroots activity to give impression of popular support)
2. Promote disinformation news sites
3. Create sufficient noise to disrupt reasonable discussions
Example hashtags and links for each tactic:
(1) #BigfootInOZ
(2) http://bigfootpics.com.au
(3) #Bigfoot4PM, #YetiSux, #DropBears, #BigfootCausedCovid
How to Program a Bot
Bots can be implemented using a scripting language to specify pre-programmed behaviours in response to received tweets and messages
Source: https://www.labnol.org/internet/write-twitter-bot/27902/
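The source above links to a real tutorial; what follows is only a minimal, self-contained sketch of the idea. post_reply is a hypothetical stand-in for a real platform API call, and the keyword rules and delays are invented for illustration.

import random
import time

RULES = {
    "bigfoot": ["Saw him near the river last night! #BigfootInOZ",
                "The photos at http://bigfootpics.com.au say it all"],
    "election": ["#Bigfoot4PM is the only honest candidate"],
}

def post_reply(user, text):
    # Hypothetical placeholder: a real bot would call the platform's API here.
    print(f"@{user} {text}")

def handle_mention(user, tweet_text):
    """Reply with a canned message when an incoming tweet matches a keyword rule."""
    for keyword, replies in RULES.items():
        if keyword in tweet_text.lower():
            post_reply(user, random.choice(replies))
            time.sleep(random.uniform(1, 5))  # a real bot would add longer, jittered delays
            return

handle_mention("some_user", "Anyone else seen Bigfoot lately?")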
Social Cyber Security
What decisions do analysts need to make to disrupt these botnets?
Q: Is this an automated account?
Q: Who/what is the target of the botnet?
Q: Is this tweet abnormal for this account?
Q: Are these accounts acting together to form a botnet?
Q: What level of influence does the botnet have on the target community?
Helping to Automate these Decisions
How can artificial intelligence (AI) and machine learning (ML) be used to help analysts make these decisions?
Q: Is this an automated account? -> Classification
Q: Is this tweet abnormal for this account? -> Anomaly detection
Q: Are these accounts acting together to form a botnet? -> Clustering
Classification
Can we learn a model that predicts the category of a given example?
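As a concrete illustration (not the lecture's actual system), the sketch below trains a classifier to predict bot vs human from a handful of made-up profile features, using scikit-learn.

from sklearn.ensemble import RandomForestClassifier

# Hypothetical features: [tweets_per_day, followers, following, account_age_days]
X_train = [[200, 15, 900, 10], [3, 250, 180, 1500], [150, 8, 700, 30], [5, 400, 350, 2000]]
y_train = [1, 0, 1, 0]  # 1 = bot, 0 = human (made-up labels for illustration)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(model.predict([[180, 12, 800, 20]]))  # predicted category for a new account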
Clustering
What are the natural categories in a dataset?
Consider a collection of animals.
How many different types of animals are there here?
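In the botnet setting, a minimal clustering sketch might look like the following: k-means groups accounts by behaviour, so a tight cluster of near-identical high-volume accounts could hint at a coordinated botnet. The features and the choice of k are assumptions for illustration.

from sklearn.cluster import KMeans
import numpy as np

# Hypothetical features per account: [tweets_per_day, followers]
X = np.array([[200, 10], [210, 12], [190, 9],      # high-volume, few followers
              [4, 300], [6, 280], [3, 350]])        # low-volume, many followers
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # accounts with the same label fall in the same behavioural group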
Anomaly Detection
Can we learn a model of what is normal so that we can spot anomalies?
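A minimal anomaly-detection sketch with made-up data: an isolation forest is fitted to an account's normal tweets, then used to score new tweets against that learned notion of normal.

from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical features per tweet: [word count, number of links]
rng = np.random.default_rng(0)
normal_tweets = np.column_stack([rng.normal(12, 2, 200),    # typical word counts
                                 rng.integers(0, 2, 200)])   # 0-1 links per tweet
detector = IsolationForest(random_state=0).fit(normal_tweets)

new_tweets = np.array([[13, 1], [120, 8]])  # the second is unusually long and link-heavy
print(detector.predict(new_tweets))         # +1 = looks normal, -1 = flagged as anomalous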
Deep Learning for Fake News Detection
[Diagram: the propagation network of a tweet and the textual content of the tweet feed into a model that combines anomaly detection, clustering, and classification to output Genuine / Fake]
Silva, Luo, Karunasekera, Leckie (2021). Embracing Domain Differences in Fake News: Cross-domain Fake News Detection using Multimodal Data. AAAI 2021
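The sketch below is not the Silva et al. model; it only illustrates the multimodal idea under assumed feature dimensions: encode text features and propagation-network features separately, fuse them, and classify genuine vs fake.

import torch
import torch.nn as nn

class MultimodalFakeNewsClassifier(nn.Module):
    def __init__(self, text_dim=64, prop_dim=16):
        super().__init__()
        self.text_enc = nn.Sequential(nn.Linear(text_dim, 32), nn.ReLU())
        self.prop_enc = nn.Sequential(nn.Linear(prop_dim, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, 2)   # logits for [genuine, fake]

    def forward(self, text_feats, prop_feats):
        fused = torch.cat([self.text_enc(text_feats), self.prop_enc(prop_feats)], dim=-1)
        return self.head(fused)

model = MultimodalFakeNewsClassifier()
logits = model(torch.randn(4, 64), torch.randn(4, 16))  # a batch of 4 tweets
print(logits.softmax(dim=-1))                            # predicted probability of genuine vs fake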
So What Could Go Wrong?
[Image source: Winnetka Animal Hospital]
So What Could Go Wrong?
Intelligent adversaries know they are being monitored by a system based on machine learning
Adversaries can modify their behaviour to manipulate the machine learning model into making the wrong decision
Types of adversarial attacks on machine learning:
Poisoning the training data to bias the learned model of normal behaviour
Manipulating the test data in ways that are imperceptible to humans to fool the learned model (adversarial noise)
Example of Adversarial Noise
Adversarial examples generated using PGD with a noise constraint, on image n02085936_6883.jpeg from the ImageNet dataset (Deng et al., 2009). The left image is the original, and the right image is the original plus the noise image shown in the middle. To humans, the differences between the original image and the perturbed image are hardly visible. For DCNNs, the noise leads to serious misclassification.
Machiraju, Harshitha & Choung, Oh-hyeon & Frossard, Pascal & Herzog, Michael. (2021). Bio-inspired Robustness: A Review
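A minimal sketch of the idea behind such attacks, using the single-step fast gradient sign method (FGSM); PGD, mentioned in the caption above, essentially iterates this step with a projection back into the allowed noise ball. The model, image, and epsilon here are toy placeholders.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in for a real image
label = torch.tensor([3])                               # its true class
epsilon = 0.03                                          # maximum per-pixel perturbation

loss = loss_fn(model(image), label)
loss.backward()

# Step in the direction that increases the loss, but keep the change tiny.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print((adversarial - image.detach()).abs().max())        # perturbation is at most epsilon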
Key concepts of AI safety
Robustness
An AI system must operate safely under a wide range of conditions; this requires:
the ability to quantify the confidence of a prediction
the ability to recognise a setting it was not trained for
Assurance
Human operators must understand why the system behaves the way it does (is the system meeting expectations?)
Explainable AI
Specification
Specification of machine learning systems refers to defining a system's goal in a way that ensures its behaviour aligns with the human operator's intentions.
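One small, concrete illustration of the robustness points above (quantifying confidence, and declining to act when unsure): the sketch below uses an assumed probability threshold to abstain and refer a decision to a human analyst. The data, model, and threshold are made up.

from sklearn.linear_model import LogisticRegression
import numpy as np

X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([0, 0, 1, 1])
clf = LogisticRegression(C=100, max_iter=1000).fit(X_train, y_train)

def predict_or_abstain(x, threshold=0.9):
    probs = clf.predict_proba([x])[0]
    if probs.max() < threshold:
        return "abstain: refer to a human analyst"
    return int(probs.argmax())

print(predict_or_abstain([0.15, 0.15]))  # well inside the training data: confident prediction
print(predict_or_abstain([0.5, 0.5]))    # on the decision boundary, so the model abstains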
What Can We Do About Adversarial Attacks on ML?
Examples of work on Adversarial ML:
1. Detection and filtering of adversarial examples during training
2. Identifying new types of adversarial attacks so that we can devise better defences
3. Developing machine learning models that are resistant to attacks during testing / deployment
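As an illustration of point 3, the sketch below shows adversarial training, one common way to build resistance: at each step the current batch is perturbed with an FGSM step and the model is trained on the perturbed batch. The model, data, and epsilon are toy placeholders, not a specific published defence.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.03

for step in range(10):                                   # toy training loop
    x = torch.rand(8, 3, 32, 32)                         # stand-in batch of images
    y = torch.randint(0, 10, (8,))                       # stand-in labels

    # Craft adversarial versions of the batch with a single FGSM step.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    # Train on the adversarial examples (often mixed with the clean ones).
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()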
Conclusion
Botnets are critical infrastructure for disinformation campaigns
AI and ML can help automate the detection of these botnets, as a key step in defending against these campaigns
However, adversarial ML creates a new attack surface that can disrupt our AI-based defences
Challenges for the Future
While we can use AI to help automate defence,
attackers can use AI to improve their attacks
What will an AI-enabled botnet look like?
In this AI arms race, does AI favour the attacker or the defender?
Botnet Resources
Reverse Engineering Socialbot Infiltration Strategies in Twitter, Freitas, Benevenuto, et al., https://arxiv.org/abs/1405.4927
Reverse Engineering Russian Internet Research Agency Tactics through Network Analysis, https://stratcomcoe.org/download/file/fid/80484
Algorithms, Bots, and Political Communication in the US 2016 Election: The Challenge of Automated Political Communication for Election Law and Administration, Howard et al., https://www.tandfonline.com/doi/full/10.1080/19331681.2018.1448735
BotCamp: Bot-driven Interactions in Social Campaigns, Abu-El-Rub et al., https://www.cs.unm.edu/~nabuelrub/BotCamp/
Social Cybersecurity: An Emerging National Security Requirement, Beskow and Carley, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/Mar-Apr-2019/117-Cybersecurity/
Adversarial Learning Resources
Explaining and Harnessing Adversarial Examples, Ian J. Goodfellow, Jonathon Shlens, et al., https://arxiv.org/pdf/1412.6572.pdf
Robust Physical-World Attacks on Machine Learning Models, Ivan Evtimov, Kevin Eykholt, Bo Li, et al., https://arxiv.org/pdf/1707.08945.pdf
Practical Black-Box Attacks against Machine Learning, Nicolas Papernot, Patrick McDaniel, Somesh Jha, Ananthram Swami, et al., https://arxiv.org/pdf/1602.02697.pdf
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality, Xingjun Ma, Bo Li, Erfani, Houle, et al., https://openreview.net/pdf?id=B1gJ1L2aW
Adversarial Examples Are Not Bugs, They Are Features, Andrew Ilyas, Shibani Santurkar, et al., NeurIPS 2019, https://arxiv.org/abs/1905.02175
Kaggle competition: https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack
Deep Learning Resources
Reading List
http://neuralnetworksanddeeplearning.com/chap1.html
http://neuralnetworksanddeeplearning.com/chap2.html
Further Resources
http://www.wired.com/2014/01/geoffrey-hinton-deep-learning
http://chronicle.com/article/The-Believers/190147/
https://class.coursera.org/neuralnets-2012-001
https://www.coursera.org/course/ml
http://www.deeplearningbook.org/
So, back to the future
How far can we go with AI?
Predictions of human-AI parity
In 2012, Mueller and Bostrom surveyed AI researchers:
when is it 50% likely we will build a machine that does most jobs at least as well as an average human?
Median response: 2040
when is it 90% likely that high-level machine intelligence is achieved?
Median response: 2075
What are the ethical limits of AI?
Trolley car dilemma
Algorithmic discrimination
Privacy vs public good
Humans and machines are indistinguishable
Killer robots
Equity: AI winners and AI losers in society
Image source: Wikipedia (Creative Commons)
Will robots and AI take over our jobs?
How to quantify the automation risk?
assessing to what extent robotics and AI abilities can replace human abilities required for over 1000 jobs
Occupational Information Network (O*NET) description of jobs (skills, abilities, knowledge)
European H2020 Robotics Multi- Annual Roadmap (MAR)
Technological readiness level (TRL)
Will robots and AI take over our jobs?
ARI (automation risk index): the proportion of a job that a robot/AI could also do
Paolillo, Antonio, et al. How to compete with robots by assessing job automation risks and resilient alternatives. Science Robotics 7.65 (2022): eabg5561.
10 Predictions for 2050
1. You are banned from driving
2. You see the doctor daily
3. Marilyn Monroe is back in the movies
4. A computer hires and fires you
5. You talk to rooms
6. A robot robs a bank
7. Germany loses to a robot soccer team
8. Ghost ships, planes and trains cross the globe
9. TV news is made without humans
10. We live on after death
What are your predictions?
Can you make a prediction of what AI will be able to do by 2050?
Email your prediction to Wafa by 9am Thursday.
We'll report any gems in Thursday's final lecture.
We will also give the results of the Project Tournament!