Publication:
Adversarial Learning through Red Teaming: From Data to Behaviour

dc.contributor.advisor Abbass, Hussein en_US
dc.contributor.advisor Shafi, Kamran en_US
dc.contributor.advisor Lokan, Chris en_US
dc.contributor.author Wang, Shir Li en_US
dc.date.accessioned 2022-03-21T11:41:46Z
dc.date.available 2022-03-21T11:41:46Z
dc.date.issued 2012 en_US
dc.description.abstract The concept of an adversary applies to all facets of human life, as we continue to dwell in a hostile world. Knowledge of an adversary's behaviour is therefore of paramount importance if we are to make sound decisions and succeed. Red teaming is an ancient technique in which an adversary's role is emulated by playing devil's advocate in order to improve one's own defences and decisions. The approach involves two competing entities, namely blue and red agents, and has been widely used in military planning to role-play the enemy; to test and evaluate its courses of action or judgement; to assess the vulnerabilities of the red team; and to understand the dynamics between the red and blue entities. The red teaming concept also maps readily to other domains that share similar characteristics with military planning, such as adversarial learning, risk assessment, and behavioural decision making in a competitive environment. Computational red teaming (CRT) is a recent approach that extends the red teaming concept into cyberspace and benefits from replacing the physical red and blue with simulated entities. The focus of this thesis is to study the effect of information on adversarial behaviour. A CRT-based framework is developed to analyse four forms of an adversary, or red agent, operating in a fixed self (blue) agent's environment: a static red with direct access to randomly manipulate the information received by the blue agent; a dynamic red with the ability to learn and evolve to counteract blue's actions; a real human playing the red agent's role; and a red that approximates human behaviours. To understand the impact of information, a statistical framework for simulating adversarial attacks is proposed to model and explore the effect of red. The heart of the simulation lies in attacks against representative samples of the training data available to blue, with the attacks generated using statistical sampling methods. The underlying assumption of the simulation is that red has the capability to identify representative samples and attack them directly. Under the influence of red, the performance of blue, represented by a single neural network and by a neural ensemble, is evaluated in static and non-stationary environments. A synthetic red teaming game environment is then created to study the second, third, and fourth forms of red. Here, CRT assists in understanding the differences and similarities, in a behavioural context, between a computational red (a machine learning agent) and a natural red (a human). Neuroevolution is selected as the computational model owing to its abilities to evolve and learn, which are essential for mimicking human behaviours. In addition, neural networks are used to approximate human behaviours from the data collected in human red teaming. The literature lacks metrics and methodologies to analyse the behaviour of both machine and human red, and to compare these behaviours; such metrics and methodologies must be able to represent, process, analyse, and compare the behaviours in an objective manner. Several metrics and methodologies are therefore proposed in this thesis for analysing and comparing possible red behaviours. The thesis demonstrates that: 1. Blind purposeful manipulation of data can be counteracted with an ensemble of learning machines. 2. In a demanding task, where the time to make a decision is very short, humans tend to ignore the information available to them and instead focus on using their skills to achieve the task. 3. Deceptive behaviour is beneficial in an environment where information is received infrequently and the received information is noisy. 4. Deceptive behaviour is not beneficial in an environment where information is frequent and noise free. 5. Machine behaviour encompasses human behaviours but extends them with more creative behaviours. en_US
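A minimal sketch of the attack model summarised in the abstract: a static red identifies representative samples of blue's training data and randomly perturbs them, while blue is realised either as a single neural network or as a small ensemble. The thesis does not publish code, so the scikit-learn dependency, the clustering-based choice of "representative" points, and all parameter values below are assumptions made purely for illustration.

    # Illustrative sketch only; names, libraries, and parameters are assumed,
    # not taken from the thesis.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Blue's training data and a clean test set (synthetic stand-in data).
    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_train, y_train = X[:400], y[:400]
    X_test, y_test = X[400:], y[400:]

    def red_attack(X, n_clusters=20, noise_scale=3.0):
        """Static red: pick representative samples (here, the points closest to
        k-means centroids, standing in for a statistical sampling method) and
        randomly perturb them."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        reps = np.array([np.argmin(np.linalg.norm(X - c, axis=1))
                         for c in km.cluster_centers_])
        X_adv = X.copy()
        X_adv[reps] += rng.normal(scale=noise_scale, size=X_adv[reps].shape)
        return X_adv

    X_adv = red_attack(X_train)

    # Blue as a single network versus a small majority-vote ensemble,
    # both trained on the manipulated data and evaluated on clean data.
    single = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X_adv, y_train)
    ensemble = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=s).fit(X_adv, y_train)
                for s in range(5)]
    votes = np.mean([m.predict(X_test) for m in ensemble], axis=0) >= 0.5

    print("single network accuracy:", accuracy_score(y_test, single.predict(X_test)))
    print("ensemble accuracy:      ", accuracy_score(y_test, votes.astype(int)))

Whether the ensemble outperforms the single network depends on the data and the strength of the perturbation; the thesis's first finding above concerns blind purposeful manipulation of this kind being counteracted by an ensemble of learning machines.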
dc.identifier.uri http://hdl.handle.net/1959.4/52214
dc.language English
dc.language.iso EN en_US
dc.publisher UNSW, Sydney en_US
dc.rights CC BY-NC-ND 3.0 en_US
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/3.0/au/ en_US
dc.subject.other Behaviour en_US
dc.subject.other Adversarial learning en_US
dc.subject.other Red teaming en_US
dc.subject.other Neuroevolution en_US
dc.subject.other Deception en_US
dc.subject.other Perception en_US
dc.subject.other Neural network en_US
dc.subject.other Ensemble en_US
dc.title Adversarial Learning through Red Teaming: From Data to Behaviour en_US
dc.type Thesis en_US
dcterms.accessRights open access
dcterms.rightsHolder Wang, Shir Li
dspace.entity.type Publication en_US
unsw.accessRights.uri https://purl.org/coar/access_right/c_abf2
unsw.identifier.doi https://doi.org/10.26190/unsworks/15773
unsw.relation.faculty UNSW Canberra
unsw.relation.originalPublicationAffiliation Wang, Shir Li, Engineering & Information Technology, UNSW Canberra, UNSW en_US
unsw.relation.originalPublicationAffiliation Abbass, Hussein, Engineering & Information Technology, UNSW Canberra, UNSW en_US
unsw.relation.originalPublicationAffiliation Shafi, Kamran, Engineering & Information Technology, UNSW Canberra, UNSW en_US
unsw.relation.originalPublicationAffiliation Lokan, Chris, Engineering & Information Technology, UNSW Canberra, UNSW en_US
unsw.relation.school School of Engineering and Information Technology
unsw.thesis.degreetype PhD Doctorate en_US
Files
Original bundle
Name: whole.pdf
Size: 5.6 MB
Format: application/pdf