
EDUCAUSE Security Conference: Incident Tracking and Reporting

Incident Tracking and Reporting
Kathy Bergsma, University of Florida
Joshua Beeman, University of Pennsylvania
2007 EDUCAUSE Security Professionals Conference
Thursday, April 12, 2007
Denver, CO
Kathy Bergsma reported on the UFL environment.

UFL has more than 50K students and is decentralized.
The first thing UFL tracks is the current contacts for security incident reporting, including network managers, server managers, information security managers and administrators, and others.
UFL has created an incident response standard that describes 8 response steps from discovery to resolution, establishes an incident response team, defines team and unit responsibilities, and sets up specific procedures for different types of incidents. It is available online at
What UFL tracks:
  • incident identification sources such as IDS (Intrusion Detection System), Email abuse complaints, flow data, and honeypots (decoys)
  • critical elements such as IP address, unit, type, severity, containment and resolution times
Various options and tools are available for ticket creation when incidents are identified, and the UFL incident response team receives daily reports on open tickets. In addition, biweekly automated reminders for open tickets are sent to their owners. The centralized unit enters a ticket at the point of discovery via IDS (currently Dragon, though they are switching to Snort); the decentralized unit can then enter updates on the ticket. Everything is done via the web.
Vulnerability detection is done with continuous Nessus top-20 scans, and the results are tracked in a SQL database. This lets them find the weak spots in their systems and compare data from year to year. The hardware for this is distributed across three machines, and a complete scan takes up to three days.
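The talk did not describe UFL's actual database layout, but storing scan results in SQL and comparing them year over year might look something like the following minimal sketch; the table name, columns, and data are invented for illustration.

```python
import sqlite3

# Hypothetical schema for tracking Nessus findings in SQL; names and
# values are illustrative, not UFL's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scan_findings (
    unit      TEXT,     -- campus unit responsible for the host
    ip        TEXT,     -- scanned host
    severity  TEXT,     -- e.g. 'critical', 'high', 'medium'
    year      INTEGER   -- year the scan ran
);
""")
rows = [
    ("Engineering", "10.0.0.5", "critical", 2006),
    ("Engineering", "10.0.0.9", "critical", 2007),
    ("Libraries",   "10.0.1.2", "critical", 2007),
]
conn.executemany("INSERT INTO scan_findings VALUES (?, ?, ?, ?)", rows)

# Year-to-year comparison of critical findings per unit.
query = """
SELECT unit, year, COUNT(*) AS criticals
FROM scan_findings
WHERE severity = 'critical'
GROUP BY unit, year
ORDER BY unit, year
"""
for unit, year, criticals in conn.execute(query):
    print(unit, year, criticals)
```

Grouping by unit and year is what makes the year-over-year comparison a single query rather than a manual tally.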
Individual unit reports are generated each semester that compare the unit to the 5 most active units on number of incidents, number of incidents adjusted for unit size, average number of days to contain incidents, number of critical vulnerabilities, and number of critical vulnerabilities adjusted for unit size. No unit wants to be in the top-5 group, which is highlighted in bright primary colors that draw attention to its security issues. The report also shows the count of each incident type and the comparison to the previous semester. The incident reporting process is semi-automated, and they have web tools to produce the graphs. [a sample of the reports with their associated bar charts are available in the presentation slides posted online at]
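Two of the per-unit metrics above (incidents adjusted for unit size, and average days to contain) are simple to compute. A sketch, with unit names and figures invented for the example:

```python
# Illustrative computation of two per-unit report metrics; the data
# here is made up, not UFL's.
units = {
    "CLAS":        {"incidents": 12, "hosts": 3000,
                    "containment_days": [1, 2, 5, 1]},
    "Engineering": {"incidents": 5,  "hosts": 800,
                    "containment_days": [3, 4]},
}

for name, u in units.items():
    # Incidents per 1000 hosts adjusts raw counts for unit size,
    # so large and small units can be compared fairly.
    u["per_1000_hosts"] = 1000 * u["incidents"] / u["hosts"]
    u["avg_containment"] = (sum(u["containment_days"])
                            / len(u["containment_days"]))

for name, u in units.items():
    print(name, round(u["per_1000_hosts"], 2), u["avg_containment"])
```

The size adjustment is the interesting design choice: a large unit with many incidents may still rank well once normalized by host count.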
A report to the CIO is generated that lists all campus units. The report shows the number of incidents, the containment time, and the number of vulnerabilities. 
Bergsma reported that 100% of campus units surveyed find these reports useful and that 46% made changes to their programs as a result. The decentralized units use the reports for:
  • Compliance reviews
  • Risk assessment
  • Strategic planning
  • Business planning
They also surveyed units on causes of changes in incidents, familiarity with the UFL policy, and degree of compliance.
UFL does not attach actual forensic data to tickets, but they do produce a forensics report and store it in the incident safe.
Joshua Beeman reported on the Penn environment.
Penn runs an open network with decentralized computing (40 cost centers) on a limited budget for 22K students and 17K faculty and staff. They have growing security concerns, as did everyone else in the room. Beeman indicated that some systems are managed or coordinated centrally.
Their security reports are generated for
  • Awareness
  • Identifying larger trends
  • Developing “security hawks”
  • Ultimately improving customer service and justifying their existence
Beeman characterized Version 1 as “gum and duct tape,” at which point an attendee asked: “You have duct tape?” Version 2 was characterized as “less gum and more tape” after significant feedback from users. He said they ultimately plan to shift to Remedy and may use some of the UFL scripts.
Before Version 1, the primary tracking system was email, so building a report meant going back through email to collect the information. One person used a paper system, and an Excel spreadsheet was also in use.
In Version 1 of the current reporting system they log incidents with the following information:
  • Date
  • IP address
  • Center names
  • Incident sources
  • Incident type
  • Handler comments (optional)
Key elements of the compromise reports are:
  • Total number of compromises
  • Total number of IP addresses
  • Ratio of compromises/IPs (their “magic number”)
  • Ranking based on ratio
  • Average based on ratio
Whereas UFL concentrates on the top 5, Penn reports on all 40 cost centers. The cost centers all want a better score and come to the security team for assistance.
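The “magic number” above is just compromises divided by registered IPs, with a ranking and an overall average derived from it. A minimal sketch, with cost center names and figures invented for illustration:

```python
# Hypothetical cost-center data; the ratio of compromises to IP
# addresses is the per-center score, ranked lowest (best) first.
centers = {
    "Wharton":     {"compromises": 4, "ips": 2000},
    "Engineering": {"compromises": 9, "ips": 1500},
    "Libraries":   {"compromises": 1, "ips": 400},
}

for name, c in centers.items():
    c["ratio"] = c["compromises"] / c["ips"]

# Rank cost centers by ratio (lower is better) and compute the
# average ratio across all centers.
ranking = sorted(centers, key=lambda n: centers[n]["ratio"])
average = sum(c["ratio"] for c in centers.values()) / len(centers)

for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(centers[name]["ratio"], 4))
print("average ratio:", round(average, 4))
```

Normalizing by IP count is what lets a 2000-host center and a 400-host center compete on the same scale.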
Key elements for critical hosts are:
  • Total number of critical hosts registered
  • Total number of IP addresses
  • Ratio of critical hosts/IPs
  • Ranking
  • Average
Beeman said that if a critical host isn’t registered with them, that unit is “in trouble,” but there is no real consequence unless they have an incident and make the news. The cost centers are ultimately responsible.
Key elements of the management reports are:
  • Summary tables with compromise & critical host rankings
  • Summary graphs with incident source and overall distribution
Beeman noted that the system alerts you that you are entering an incident on a critical host by turning the entry red.
Were criteria defined at the beginning, or reactively? There is a distinction between incidents and events, and they found a need to add “non-event,” modifying the system as it was used. Fluidity in incident type [DMCA vs. vulnerability vs. compromise vs. non-event] has been important, but reporters sometimes use their own language to describe a specific incident.
Each cost center receives a detailed copy of its report. However, Beeman said they are only truly interested in being at the top (incident free) and may not pay attention to the details. The report has a graph that clearly shows the top cost centers, and pie charts show progress in “proactive” incident identification. Sample reports were included in the presentation slides.
The gum-and-duct-tape version was very successful, and Beeman’s unit received additional funding to build a ColdFusion database, which became Version 2.
GRADI is their Version 2 web-based incident tracking system. It captures all of the previous fields plus many more, and it provides automated processes for tasks such as DNS and host contact lookup, email routing, and custom handling based on incident type.
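The talk did not show how GRADI implements its lookups, but the kind of DNS and host-contact automation it provides might be sketched as follows; the `lookup_host` helper and the contact mapping are hypothetical stand-ins, not Penn’s code.

```python
import socket

def lookup_host(ip, contacts):
    """Resolve an incident IP to a hostname and a registered contact.

    'contacts' is a hypothetical IP-to-email mapping standing in for
    Penn's host-contact registration data.
    """
    try:
        hostname = socket.gethostbyaddr(ip)[0]  # reverse DNS lookup
    except OSError:
        hostname = None  # no reverse DNS record for this IP
    # Route to the registered contact for this IP, if any.
    return hostname, contacts.get(ip)

# Example: a single registered host contact.
contacts = {"127.0.0.1": "security@example.edu"}
print(lookup_host("127.0.0.1", contacts))
```

Automating this pairing of reverse DNS with registered contacts is what turns an IP address in a new ticket into a routable email notification.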
New elements in the system were suggested by their users and are:
  • Wireless vs. wired
  • DMCA vs. non-DMCA
  • Critical vulnerabilities
  • New management reports
  • Comparative studies
The Version 2 report is two pages long and has all the summaries on the top sheet for easy viewing. Samples of these were in the presentation slides.
Version 2 provides
  • Tools and data for senior management
  • Increased security awareness
  • Identification of general trends and problem areas
  • Improvement of the university’s overall security posture
and it created security “hawks” in the field.
Beeman closed by reminding us that Version 1 was based on an individual spreadsheet with five data fields.
The two sets of presentation slides for this session are located at
