Computer Science Technical Reports
CS-2007-01 (14 pages) (pdf)
Title: Detecting Anomalies by Weighted Rules
Authors: Gaurav Tandon and Philip K. Chan
Contact Email Address:
Faculty Sponsor: Philip K. Chan
TR number assignment date: 11 January 2007
TR posted: 19 January 2007
Anomaly detection focuses on modeling normal behavior and identifying significant deviations, which could indicate novel attacks. The previously proposed LERAD algorithm can efficiently learn a succinct set of comprehensible rules for detecting anomalies. We conjecture that LERAD eliminates rules with potentially high coverage, which can lead to missed detections. This study proposes weights that approximate rule confidence and are learned incrementally. We evaluate our algorithm on various network and host datasets. Compared to LERAD, our technique detects more attacks at low false-alarm rates with minimal computational overhead.
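A minimal sketch of the idea of weighting rules by an incrementally learned confidence estimate, in the spirit of the abstract above. The class, its attribute names, and the exact scoring formula are illustrative assumptions, not the authors' actual algorithm:

```python
# Illustrative sketch (not the paper's implementation): a rule whose
# confidence weight is updated incrementally and used to score violations.
class WeightedRule:
    def __init__(self, antecedent, allowed_values):
        self.antecedent = antecedent        # dict: attribute -> required value
        self.allowed = set(allowed_values)  # permitted values of the target attribute
        self.conforming = 0                 # matching samples that satisfied the rule
        self.total = 0                      # samples that matched the antecedent

    def matches(self, record):
        return all(record.get(k) == v for k, v in self.antecedent.items())

    def update(self, record, target_attr):
        """Incrementally update the confidence estimate from one sample."""
        if self.matches(record):
            self.total += 1
            if record.get(target_attr) in self.allowed:
                self.conforming += 1

    @property
    def weight(self):
        """Approximate rule confidence: fraction of conforming samples."""
        return self.conforming / self.total if self.total else 0.0

    def anomaly_score(self, record, target_attr):
        """Score a violation in proportion to the rule's learned weight."""
        if self.matches(record) and record.get(target_attr) not in self.allowed:
            return self.weight
        return 0.0
```

A rule that is frequently violated during training thus contributes a lower score at detection time, rather than being discarded outright.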
CS-2007-02 (10 pages) (pdf)
Title: Spatio-Temporal Anomaly Detection for Mobile Devices
Authors: Gaurav Tandon and Philip K. Chan
Contact Email Address:
Faculty Sponsor: Philip K. Chan
TR number assignment date: 2 August 2007
TR posted: 3 August 2007
With the increasing popularity of mobile devices, there has been a significant rise in mobile-related security problems. The biggest threat to a mobile subscriber is a lost or stolen device, which can lead to confidential data leakage, identity theft, misuse, impersonation, and high service charges. A significant amount of time may elapse between losing a device and disabling it through the service provider, during which an unauthorized malicious user may gain access and inflict severe damage. We propose a probabilistic approach to spatio-temporal anomaly detection and evaluate smoothing techniques for sparse data. Our approach outperforms a Markov chain model in experiments with a mobile phone dataset comprising over 500,000 hours of real data. Results indicate that our approach can effectively and efficiently detect device abnormalities in location, time, or both.
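A hypothetical sketch of probabilistic spatio-temporal scoring with add-one (Laplace) smoothing for sparse location/time counts. The report evaluates several smoothing techniques; this shows only one, and the class and parameter names are assumptions for illustration:

```python
from collections import Counter

# Illustrative sketch only: estimate P(location, time_bin) from counts,
# with Laplace smoothing so unseen (location, time) pairs get nonzero mass.
class SpatioTemporalModel:
    def __init__(self, locations, time_bins):
        self.counts = Counter()
        self.total = 0
        self.vocab = len(locations) * len(time_bins)  # size of the event space

    def observe(self, location, time_bin):
        self.counts[(location, time_bin)] += 1
        self.total += 1

    def probability(self, location, time_bin):
        """Smoothed joint estimate of P(location, time_bin)."""
        return (self.counts[(location, time_bin)] + 1) / (self.total + self.vocab)

    def is_anomalous(self, location, time_bin, threshold):
        """Flag observations whose smoothed probability falls below a threshold."""
        return self.probability(location, time_bin) < threshold
```

A device seen at a familiar location but at an unusual hour (or vice versa) receives a low joint probability and is flagged, matching the abstract's goal of detecting abnormalities in location, time, or both.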
CS-2007-03 (256 pages) (pdf)
Title: Malicious Mobile Code Related Experiments with an Extensible Network Simulator
Authors: Attila Ondi
Contact Email Address:
Faculty Sponsor: Richard Ford
TR number assignment date: 10 October 2007
TR posted: 12 October 2007
The automated spread of worms such as Code-Red, SQL/Slammer, and Nimda has caused costly problems for computers connected to the Internet. Even users whose machines were not vulnerable to these threats suffered a loss of productivity and experienced great frustration as connectivity and network traffic were negatively impacted during outbreaks. Although the number of new worm attacks reported in the media seems to be declining, it is vital that researchers study the effects of malicious code on the global network to understand how to defend against future threats. The system chosen for studying the spread of worms and viruses in this work is Hephaestus, a discrete-event network simulator developed during the course of this dissertation. Several experiments on self-replicating malicious computer code have been performed, including validation of the simulator through a study of the spread of Code-Red, efficient defense against email-based worms, and distribution of policy information in an enterprise network. This dissertation reports the results of these experiments, as well as theoretical insight concerning spread metrics and how the damage caused by malicious code should be measured.
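To give a flavor of the discrete-event approach the abstract describes, here is a minimal sketch of a random-scanning worm (Code-Red-style) simulated with an event queue. Every name and parameter here is illustrative; this is not Hephaestus or the dissertation's actual model:

```python
import heapq
import random

# Minimal discrete-event sketch: infected hosts scan random addresses at an
# exponential rate; a scan that hits a vulnerable, uninfected host infects it.
def simulate_worm(n_hosts, vulnerable_frac, scan_rate, t_end, seed=0):
    rng = random.Random(seed)
    vulnerable = set(rng.sample(range(n_hosts), int(n_hosts * vulnerable_frac)))
    patient_zero = next(iter(vulnerable))
    infected = {patient_zero}
    # Event queue of (time, scanning_host), processed in time order.
    events = [(rng.expovariate(scan_rate), patient_zero)]
    while events:
        t, host = heapq.heappop(events)
        if t > t_end:
            break
        target = rng.randrange(n_hosts)  # random-scanning target selection
        if target in vulnerable and target not in infected:
            infected.add(target)
            heapq.heappush(events, (t + rng.expovariate(scan_rate), target))
        heapq.heappush(events, (t + rng.expovariate(scan_rate), host))
    return len(infected)
```

Even this toy version exhibits the characteristic slow-start-then-explosive growth of random-scanning worms, which is one reason spread metrics (how fast, how far, how much damage) deserve the theoretical treatment the dissertation gives them.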
CS-2007-04 (256 pages) (pdf)
The overarching goal of this thesis is to pave the way toward a comprehensive solution to the decades-old problem of integrating databases and programming languages. For this purpose, we propose a record calculus as an extension of an ML-style functional programming language core. In particular, we describe:
- a set of polymorphic record operations that are expressive enough to define the operators of the relational algebra;
- a type system together with a type inference algorithm, based on the theory of qualified types, to correctly capture the types of said polymorphic record operations;
- an algorithm for checking the consistency (satisfiability of predicates) of the inferred types;
- an algorithm for improving and simplifying types; and
- an outline of an approach to explaining type errors in the resulting type system in an informative way.
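The thesis develops these operators inside a typed ML-style record calculus; as a rough, untyped illustration of how polymorphic record operations can express relational algebra, one might mimic them over Python dicts (function names and shapes here are assumptions for illustration only):

```python
# Illustrative sketch only: relational-algebra operators over lists of
# dicts, standing in for the thesis's typed polymorphic record operations.
def project(relation, attrs):
    """Projection: keep only the named fields of each record."""
    return [{a: row[a] for a in attrs} for row in relation]

def select(relation, predicate):
    """Selection: keep the rows satisfying the predicate."""
    return [row for row in relation if predicate(row)]

def natural_join(r, s):
    """Natural join: merge rows that agree on all shared field names."""
    shared = set(r[0]) & set(s[0]) if r and s else set()
    return [{**a, **b} for a in r for b in s
            if all(a[k] == b[k] for k in shared)]
```

The point of the thesis's type system is precisely what this untyped sketch lacks: inferring, checking, and simplifying the record types these operators require, so that errors like projecting a missing field are caught statically.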