Sitemap

A list of all the posts and pages found on the site. For you robots out there, there's an XML version available for digesting as well.

Pages

About me

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Awards

Best Paper Award

Published:

Awarded for a paper included in the 13th International Conference on Malicious and Unwanted Software (MALWARE).

National Security Agency Research Team of the Year Award

Published:

This award recognizes and promotes scientific excellence, research breakthroughs, and technological innovation that will enable the NSA, the Intelligence Community, and the Department of Defense to maintain and extend intelligence advantages.

Portfolio

Publications

Static Malware Detection & Subterfuge: Quantifying the Robustness of Machine Learning and Current Anti-Virus

Published in The 13th International Conference on Malicious and Unwanted Software (MALWARE), 2018 (Best Paper!)

As machine-learning (ML) based systems for malware detection become more prevalent, it becomes necessary to quantify the benefits compared to the more traditional anti-virus (AV) systems widely used today. It is not practical to build an agreed upon test set to benchmark malware detection systems on pure classification performance. Instead we tackle the problem by creating a new testing methodology, where we evaluate the change in performance on a set of known benign & malicious files as adversarial modifications are performed. The change in performance combined with the evasion techniques then quantifies a system’s robustness against that approach. Through these experiments we are able to show in a quantifiable way how purely ML based systems can be more robust than AV products at detecting malware that attempts evasion through modification, but may be slower to adapt in the face of significantly novel attacks.

Recommended citation: Fleshman, W., Raff, E., Zak, R., McLean, M., & Nicholas, C. (2018). Static Malware Detection & Subterfuge: Quantifying the Robustness of Machine Learning and Current Anti-Virus. In The 13th International Conference on Malicious and Unwanted Software (MALWARE). https://fleshman.dev/files/subterfuge.pdf
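
As a rough illustration of the evaluation methodology summarized above, the sketch below compares a detector's hit rate on the same known-malicious files before and after an evasion-style modification. It is a minimal sketch assuming hypothetical detector and modify callables; it is not the paper's actual tooling.

```python
from typing import Callable, Iterable

def detection_rate(detector: Callable[[bytes], bool], files: Iterable[bytes]) -> float:
    """Fraction of files the detector flags as malicious."""
    files = list(files)
    return sum(1 for f in files if detector(f)) / len(files)

def robustness_drop(detector: Callable[[bytes], bool],
                    malware: Iterable[bytes],
                    modify: Callable[[bytes], bytes]) -> float:
    """Change in detection rate on known-malicious files after an evasive modification.
    A smaller drop suggests a detector that is more robust to that evasion technique."""
    malware = list(malware)
    before = detection_rate(detector, malware)
    after = detection_rate(detector, [modify(m) for m in malware])
    return before - after
```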

Non-Negative Networks Against Adversarial Attacks

Published in AAAI-2019 Workshop on Artificial Intelligence for Cyber Security, 2019

Adversarial attacks against neural networks are a problem of considerable importance, for which effective defenses are not yet readily available. We make progress toward this problem by showing that non-negative weight constraints can be used to improve resistance in specific scenarios. In particular, we show that they can provide an effective defense for binary classification problems with asymmetric cost, such as malware or spam detection. We also show the potential for non-negativity to be helpful to non-binary problems by applying it to image classification.

Recommended citation: Fleshman, W., Raff, E., Sylvester, J., Forsyth, S., & McLean, M. (2019). Non-Negative Networks Against Adversarial Attacks. In AAAI-2019 Workshop on Artificial Intelligence for Cyber Security. https://fleshman.dev/files/nonneg.pdf
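
A minimal sketch of one way to impose the non-negative weight constraint described above, by clamping a linear layer's weights in the PyTorch forward pass; the paper's actual architectures and training details may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonNegativeLinear(nn.Linear):
    """Linear layer whose weights are clamped to be non-negative in the forward pass."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight.clamp(min=0.0), self.bias)

# Example: a small binary classifier whose output layer uses non-negative weights,
# so each (non-negative, post-ReLU) hidden feature can only push the score upward.
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), NonNegativeLinear(64, 1))
logits = model(torch.randn(8, 256))  # scores for 8 inputs
```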

Deception and the Strategy of Influence

Published in National Security Agency’s The Next Wave, Vol. 23, No. 1, 2021

Organizations have long used deception as a means to exert influence in pursuit of their agendas. In particular, information operations such as propaganda distribution, support of antigovernment protest, and revelation of politically and socially damaging secrets were abundant during World War II and the Cold War. A key component of each of these efforts is deceiving the targets by obscuring intent and identity. Information from a trusted source is more influential than information from an adversary and therefore more likely to sway opinions. The ubiquitous adoption of social media, characterized by user-generated and peer disseminated content, has notably increased the frequency, scale, and efficacy of influence operations worldwide. In this article, we explore how methods of deception including audience building, media hijacking, and community subversion inform the techniques and tradecraft of today’s influence operators. We then discuss how a properly equipped and informed public can diagnose and counter malign influence operations.

Recommended citation: B., B., Fleshman, W., H., K., Kaliszewski, R., R., S. (2020). Deception and the Strategy of Influence. In National Security Agency’s The Next Wave, Vol. 23, No. 1. https://www.govinfo.gov/content/pkg/GPO-TNW-23-1-2021/pdf/GPO-TNW-23-1-2021.pdf

Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection

Published in The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), 2021

Recent works within machine learning have been tackling inputs of ever-increasing size, with cybersecurity presenting sequence classification problems of particularly extreme lengths. In the case of Windows executable malware detection, inputs may exceed 100 MB, which corresponds to a time series with T=100,000,000 steps. To date, the closest approach to handling such a task is MalConv, a convolutional neural network capable of processing up to T=2,000,000 steps. The O(T) memory of CNNs has prevented further application of CNNs to malware. In this work, we develop a new approach to temporal max pooling that makes the required memory invariant to the sequence length T. This makes MalConv 116× more memory efficient, and up to 25.8× faster to train on its original dataset, while removing the input length restrictions to MalConv. We re-invest these gains into improving the MalConv architecture by developing a new Global Channel Gating design, giving us an attention mechanism capable of learning feature interactions across 100 million time steps in an efficient manner, a capability lacked by the original MalConv CNN. Our implementation can be found at https://github.com/FutureComputing4AI/MalConv2

Recommended citation: Raff, E., Fleshman, W., Zak, R., Anderson, H. S., Filar, B., & McLean, M. (2021). Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI). https://fleshman.dev/files/extreme.pdf
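
To illustrate why temporal max pooling can be made invariant to the sequence length T in memory, here is a forward-pass-only sketch that scans a byte sequence in overlapping chunks and keeps a running maximum of the convolutional features. It assumes the sequence is at least one kernel width long and does not attempt to reproduce the paper's training procedure or the MalConv architecture.

```python
import torch
import torch.nn as nn

def chunked_temporal_max(byte_ids: torch.Tensor,
                         embed: nn.Embedding,
                         conv: nn.Conv1d,
                         chunk: int = 1_000_000) -> torch.Tensor:
    """Global max-over-time of conv features, computed chunk by chunk so peak
    memory scales with the chunk size rather than the full sequence length."""
    k = conv.kernel_size[0]
    running = None
    for start in range(0, len(byte_ids) - k + 1, chunk):
        piece = byte_ids[start:start + chunk + k - 1]        # overlap by k-1 so no window is skipped
        h = conv(embed(piece).transpose(0, 1).unsqueeze(0))  # (1, channels, time)
        m = h.amax(dim=-1)                                   # max over this chunk's time steps
        running = m if running is None else torch.maximum(running, m)
    return running  # (1, channels), identical to pooling over the whole sequence at once
```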

Toucan: Token-Aware Character Level Language Modeling

Published in ArXiv, 2023

Character-level language models obviate the need for separately trained tokenizers, but efficiency suffers from longer sequence lengths. Learning to combine character representations into tokens has made training these models more efficient, but they still require decoding characters individually. We propose Toucan, an augmentation to character-level models to make them “token-aware”. Comparing our method to prior work, we demonstrate significant speed-ups in character generation without a loss in language modeling performance. We then explore differences between our learned dynamic tokenization of character sequences with popular fixed vocabulary solutions such as Byte-Pair Encoding and WordPiece, finding our approach leads to a greater amount of longer sequences tokenized as single items. Our project and code are available at https://nlp.jhu.edu/nuggets/.

Recommended citation: William Fleshman and Benjamin Van Durme, Toucan: Token-Aware Character Level Language Modeling, 2023. https://fleshman.dev/files/toucan.pdf

AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees

Published in ArXiv, 2024

Large language models (LLMs) are increasingly capable of completing knowledge intensive tasks by recalling information from a static pretraining corpus. Here we are concerned with LLMs in the context of evolving data requirements. For instance: batches of new data that are introduced periodically; subsets of data with user-based access controls; or requirements on dynamic removal of documents with guarantees that associated knowledge cannot be recalled. We wish to satisfy these requirements while at the same time ensuring a model does not forget old information when new data becomes available. To address these issues, we introduce AdapterSwap, a training and inference scheme that organizes knowledge from a data collection into a set of low-rank adapters, which are dynamically composed during inference. Our experiments demonstrate AdapterSwap’s ability to support efficient continual learning, while also enabling organizations to have fine-grained control over data access and deletion.

Recommended citation: William Fleshman, Aleem Khan, Marc Marone, and Benjamin Van Durme, AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees, 2024. https://fleshman.dev/files/adapterswap.pdf
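
The sketch below is a loose illustration of the organizing idea described above: one low-rank adapter per access-controlled data partition, composed per request and removable for deletion. The class and function names are hypothetical and do not reflect the actual AdapterSwap implementation.

```python
import torch

class LowRankAdapter:
    """Low-rank weight update (B @ A) trained on one access-controlled data partition."""
    def __init__(self, d_out: int, d_in: int, rank: int = 8):
        self.A = torch.randn(rank, d_in) * 0.01
        self.B = torch.zeros(d_out, rank)  # zero-initialized so the adapter starts as a no-op

    def delta(self) -> torch.Tensor:
        return self.B @ self.A

def compose(base_weight: torch.Tensor, adapters: dict, allowed: set) -> torch.Tensor:
    """Apply only the adapters the caller is permitted to use; deleting a partition's
    knowledge amounts to dropping its adapter from the store."""
    w = base_weight.clone()
    for name in allowed:
        w = w + adapters[name].delta()
    return w

# One adapter per data partition, selected per request according to access controls.
adapters = {"finance": LowRankAdapter(512, 512), "public": LowRankAdapter(512, 512)}
weight = compose(torch.randn(512, 512), adapters, allowed={"public"})
```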

RE-Adapt: Reverse Engineered Adaptation of Large Language Models

Published in ArXiv, 2024

We introduce RE-Adapt, an approach to fine-tuning large language models on new domains without degrading any pre-existing instruction-tuning. We reverse engineer an adapter which isolates what an instruction-tuned model has learned beyond its corresponding pretrained base model. Importantly, this requires no additional data or training. We can then fine-tune the base model on a new domain and readapt it to instruction following with the reverse engineered adapter. RE-Adapt and our low-rank variant LoRE-Adapt both outperform other methods of fine-tuning, across multiple popular LLMs and datasets, even when the models are used in conjunction with retrieval-augmented generation.

Recommended citation: William Fleshman and Benjamin Van Durme, RE-Adapt: Reverse Engineered Adaptation of Large Language Models, 2024. https://fleshman.dev/files/readapt.pdf
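
One plausible reading of the reverse-engineered adapter described above is a per-parameter difference between the instruction-tuned and pretrained checkpoints, later added back onto a base model that has been fine-tuned on a new domain. The sketch below shows that arithmetic on state dicts of tensors; it is an assumption-laden simplification, not the paper's exact RE-Adapt or LoRE-Adapt procedure.

```python
import torch

def reverse_engineer_adapter(instruct_sd: dict, base_sd: dict) -> dict:
    """Per-parameter difference capturing what instruction tuning added to the base model."""
    return {k: instruct_sd[k] - base_sd[k] for k in base_sd}

def readapt(domain_sd: dict, adapter: dict, alpha: float = 1.0) -> dict:
    """After fine-tuning the base model on a new domain, restore instruction
    following by adding the recovered adapter back (optionally scaled by alpha)."""
    return {k: v + alpha * adapter[k] for k, v in domain_sd.items()}

# Toy usage with 2x2 "weights" standing in for full model state dicts.
base = {"w": torch.zeros(2, 2)}
instruct = {"w": torch.ones(2, 2)}
adapter = reverse_engineer_adapter(instruct, base)  # what instruction tuning added
domain = {"w": torch.full((2, 2), 0.5)}             # base fine-tuned on a new domain
restored = readapt(domain, adapter)                 # new-domain knowledge + instruction following
```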

RE-AdaptIR: Improving Information Retrieval through Reverse Engineered Adaptation

Published in ArXiv, 2024

Large language models (LLMs) fine-tuned for text-retrieval have demonstrated state-of-the-art results across several information retrieval (IR) benchmarks. However, supervised training for improving these models requires numerous labeled examples, which are generally unavailable or expensive to acquire. In this work, we explore the effectiveness of extending reverse engineered adaptation to the context of information retrieval (RE-AdaptIR). We use RE-AdaptIR to improve LLM-based IR models using only unlabeled data. We demonstrate improved performance both in training domains as well as zero-shot in domains where the models have seen no queries. We analyze performance changes in various fine-tuning scenarios and offer findings of immediate use to practitioners.

Recommended citation: William Fleshman and Benjamin Van Durme, RE-AdaptIR: Improving Information Retrieval through Reverse Engineered Adaptation, 2024. https://fleshman.dev/files/readaptir.pdf

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.