
We are thinking about changing our network architecture.

 

As our network has grown and the complexity of our public-facing systems and the connectivity needs of those systems have increased, we are wondering what value our DMZ delivers.

 

As an example, public-facing systems in the DMZ require access to LDAP/AD for AAA, SQL for database lookups, Exchange for mail delivery and relay, and so on.

 

For those of you with non-trivial public-facing systems, where do you draw the line between security and access? If our most visible public-facing systems (the ones most likely to be attacked) require internal AAA and SQL access, what are we protecting?

 

Given current system requirements and the evolution of security, are the reasons for setting up a DMZ 15 years ago still valid, and is the value of maintaining a DMZ worth the associated costs? If not, what are the alternatives?

 

 

Thanks.

Jason Youngquist, CISSP

Information Technology Security Engineer

Technology Services

Columbia College

1001 Rogers Street, Columbia, MO  65216

(573) 875-7334

jryoungquist@ccis.edu

http://www.ccis.edu

Comments

Message from mail@jeffmoore.com

Security is like an onion, or an ogre: it has many layers. (End of horrible Shrek reference! Sorry, it's been a long week.)

For us the layers are a lot of overhead, but I feel they are very, very valuable. Another thing to consider in your questioning is the changing landscape of firewalls. If I were asking this question of our network, I would rephrase it as "Are NGFW DMZs relevant in our architecture?" The part these play can be very different from that of a traditional firewall. Even without a deny-any stance, I would certainly think a DMZ would be beneficial in controlling traffic and threats. Minimizing targets and the areas affected is a major benefit. Also, a lot of attacks today use social engineering to target a random person at Company A and gain access to their network. Without layers, those types of attacks can easily gain access.

Take all this with a grain of salt cause I am still livin in the 90s!!!


Jeff M



Hi,

Columbia University has been running an open network for many years. If you search for my name and EDUCAUSE, you will find some of my presentations; you can get a good idea of how we do it from them. Enjoy!

Joel Rosenblatt, Director, Network & Computer Security
Columbia Information Security Office (CISO)
Columbia University, 612 W 115th Street, NY, NY 10025 / 212 854 3033
http://www.columbia.edu/~joel
Public PGP key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x90BD740BCC7326C3
Message from hhoffman@ip-solutions.net

Heya Jason,

Our mantra has always been "each host on our network must be able to protect itself," and so we don't have a DMZ. Every host is meant to be running a host-based firewall that allows specific services to be accessible from predetermined locations. That doesn't mean that having backup access controls in place is a bad thing.

Cheers,
Harry
On 30 Aug 2012, at 20:09, Harry Hoffman wrote:

> Heya Jason,
>
> Our mantra has always been: "Each host on our network must be able to protect itself" and so we don't have a DMZ. Every host is meant to be running a host based firewall that allows for specific services to be accessible from predetermined locations.

Harry, that sounds nice, but you have no extra control there. While I would love to see a host-based layer everywhere, going from managing two firewall rulesets to several hundred firewalls is far beyond our capability.

>> We are thinking about changing our network architecture. As our network has grown and the complexity of our public facing systems and connectivity needs of those systems has increased, we are wondering what value our DMZ delivers.

A DMZ very much still provides value, IMO. I sleep better knowing I'm not relying on one host-based config controlled by N server admins to prevent constituents (or the innernets) from connecting to our Oracle databases, our identity solutions, etc. While this doesn't provide me much intra-network control, by separating networks appropriately and putting hard borders between them, I can make sure most interesting server interactions end up inter-network and thereby cross one or more well-controlled borders.

>> As an example, public facing systems in the DMZ that require access to LDAP/AD for AAA, SQL for database lookups, Exchange for mail delivery and relay, etc.

A DMZ doesn't need to be a black hole to provide value and protection. I'd rather maintain a perimeter around that network and very tightly control egress, with other perimeters around the other networks where I tightly control ingress, than put those servers inside my one and only server perimeter, have little control over both ingress and egress, and rely on individual host-based firewall configs all around. Unless we get 3X more server admins; then perhaps I'd switch. But that's not cheaper or more efficient from where I sit.

>> For those of you with non-trivial public facing systems, where do you draw the balance line between security and access? If our most visible public facing systems (most likely to be attacked) require internal AAA & SQL access, what are we protecting?

Uh, all of those internal systems that the internet should not talk to? With a "non-trivial" exposure, it becomes that much more important to define and maintain those lines, lest you wind up trying to unwind and map out the world's largest twine ball.

>> Given current system requirements and the evolution of security, are the reasons for setting up a DMZ 15 years ago still valid, and is the value of maintaining a DMZ worth the associated costs and if not, what are the alternatives?

Yes, I think so. Cloud/hosted services affect the calculation, but it's certainly not out the window. Would you give up using antivirus software? Passwords?

-jth
On Aug 30, 2012, at 16:09, Youngquist, Jason R. wrote:

> Given current system requirements and the evolution of security, are the reasons for setting up a DMZ 15 years ago still valid, and is the value of maintaining a DMZ worth the associated costs and if not, what are the alternatives?

We never did a full-blown DMZ. Firewalls are deployed where needed and/or required, but everything else is just out on public IP space and not firewalled. A border firewall of some sort will likely be in our future, but we will not be doing a complete re-architecture of our network to accommodate it.

--
Julian Y. Koh
Manager, Network Transport, Telecommunications and Network Services
Northwestern University Information Technology (NUIT)
2001 Sheridan Road #G-166
Evanston, IL 60208
847-467-5780
NUIT Web Site:
PGP Public Key:
I'm a fan of border firewalls when the border can be drawn around the application servers and the stored data that warrant a serious level of protection that can be defined in terms of an allowed protocol set. If you twist my arm, maybe I can also include the expected community of users by network address as a poor stand-in for the expected community of people, but I'd rather handle that part by strong authentication and additional Identity and Access Management infrastructure.

I'm less a fan of borders in some other situations, particularly when the idea is to draw one around a large enterprise such as a big university. The conceptual problem I have is that we are seeing huge growth in personally owned, high-function mobile devices that connect over both enterprise wireless networks and carrier 3G/4G networks. The same user on the same device would be "inside" one moment and "outside" the next, and may spend substantial time on other networks such as home networks or coffee shop networks where they can quickly go from clean to compromised.

All my instincts tell me that enterprise borders are less helpful, and that I want our focus to be on placing well-designed protection very close to the resources (data, app servers) we want to protect and to treat all else as public and untrusted, even if a device happens to have an IP address at the moment that "belongs" to the University. I'm a fan of open networks, closed servers, protected sessions.
Message from ena@tc.columbia.edu

One can understand why the network gurus say we shouldn't do elaborate firewalling at the network level, but rather close down the hosts. If a department has one or two servers, fine, let them be responsible for locking them down. If the IT dept has 250 servers managed by 3 or 4 admins, then what? Are any of your server admin teams happy with a system for managing the "personal firewall" on each server? Can you set it locally and forget it every time you deploy a new server? Don't your port requirements change, as ours do, when there's an app upgrade or a middleware upgrade, etc.?

Some days it seems as though it's really about manageability.

V. Ena Haines
Director of Information Technology
Teachers College, Columbia University
525 West 120th Street
New York, NY
10027
V: 212-678-3486
F: 212-678-3243



Side thread on the topic; apologies. 

 

We’ve looked a bit at Microsoft’s Server and Domain Isolation and DirectAccess systems, which offer the promise of some of this host-level firewalling and control close to the server.

 

Has anyone ever had a conversation with, say, an internal or other IT-audit group about separation-of-duties issues when server (or Active Directory) admins control network traffic policy, instead of a separate network or Infosec group?

 

   -jml

 




Message from mike.caudill@duke.edu

Hi Ena,

The problem with the concentric circles approach is that once you get past the firewall, without other layered security protections one "trusted" host can easily attack and infect another "trusted" host. And if you look at the statistics on what AV software is actually able to catch, it does not even come close to being 100% effective. A perimeter firewall can perform some useful functions, but it can introduce problems as well.

You really need a more thorough approach: network, application, and security baselines, host-based firewalls, network firewalls, log analysis, netflow analysis, and others. There are no silver bullets here, no matter what your vendors may tell you. No firewall or IPS will catch everything. And if you do too much from a central choke point, you end up with a config that no one fully understands or ever wants to touch for fear that something might break.

The piece that you did hit on, though, was scalability and manageability. Whatever architecture you go with, make sure that it is both scalable and manageable. There are products out there that will help visualize your networks and model proposed changes, which can help in managing a complex set of rules across multiple devices.

Mike Caudill
Assistant Director, Cyber Defense and Response
Duke Medicine
Phone: +1-919-668-2144 / +1 919-522-4931 (cell)





Message from david.byers@liu.se

Whether you have perimeter protection or not does not greatly impact the need for protection on each host. Chances are pretty good that eventually something inside your perimeter will become a malware-infested zombie, attacking anything and everything it can -- and your typical border firewall will sit there, oblivious. The wider your perimeter, the more likely this is to happen. So firewalling at the network level or no, you still need to lock down the hosts.

Locking down the hosts doesn't necessarily mean deploying a "personal firewall". It could (and should) first and foremost mean ensuring that all accessible services are secure, and that only those services that need to be running are running. Do that right, and the personal firewall becomes much simpler.

This can be done with hundreds or thousands of servers. It's not *that* hard. It helps to have good configuration management tools and a reasonable change control process in place. But even without that, it's doable with a fairly large number of servers (we manage pretty well).

I am, by the way, not a network guru -- I'm first and foremost a security person. And I think that "defense in depth" are words to live by in the IT security domain.

--
David Byers
Head of Division Networking, IRT and Telephony
Linköping University
Sweden
On 9/6/2012 2:24 PM, Mike Caudill wrote:
> Hi Ena,
>
> The problem with the concentric circles approach is that once you get past the firewall, without other layered security protections one "trusted" host can easily attack and infect another "trusted" host.  And if you look at the statistics on what AV software is actually able to catch, it does not even come close to being 100% effective.  A perimeter firewall can perform some useful functions, but can also introduce problems as well.

That's the classic "onion" model.  I prefer the "garlic" model...  separate layered cloves of application areas with their own common core, wrapped around a common infrastructure.  We do this internally with VRFs (minimizes the collateral damage of a single compromised host to the container).

> All my instincts tell me that enterprise borders are less helpful, and that I want our focus to be on placing well-designed protection very close to the resources (data, app servers) we want to protect and to treat all else as public and untrusted, even if a device happens to have an IP address at the moment that "belongs" to the University.

http://www.internetworldstats.com/stats.htm says the Dec 31 2011 internet user population was 2,267,233,742.  Should they all have access to your front door?  Can they all try the lock?


> I'm a fan of open networks, closed servers, protected sessions.

Using 1918 addresses internally with a default-deny policy eliminates the knocking potential to everything except your static-NAT'ed servers.  That's a whale of a risk exposure reduction with a simple perimeter firewall :)

Now port-restrict the openings at the perimeter, or as discussed earlier, run the public-facing side through an F5/load-balancer/firewall to get to a further closed back-end for even more reduction.

Or join the other lemmings and throw it all up in the cloud and let someone else worry about it :)  Just be sure to duck the auditors :)

Jeff
Message from mike.caudill@duke.edu

Jeff Kell wrote:

> Using 1918 addresses internally with a default-deny policy eliminates the knocking potential to everything except your static-NAT'ed servers.  That's a whale of a risk exposure reduction with a simple perimeter firewall :)

Only so long as you have your garlic model. Much of malware distribution today is a come-and-fetch-it model. Even though attacks occur constantly on HTTP, SSH, RDP, and other ports, many forms of malware now get onto hosts without a malicious machine in a foreign country trying to break in with a drive-by attack. Your suggested approach of compartmentalization for internal hosts is more important than just using RFC1918 addresses and believing that private addressing alone buys you large increases in security. All that RFC1918 addressing does is require a different exploitation avenue or an additional step for the attacker.

-Mike-


Mike Caudill
Assistant Director, Cyber Defense and Response
Duke Medicine
Phone: +1-919-668-2144 / +1 919-522-4931 (cell)


David Byers commented:

# Whether you have perimeter protection or not does not greatly impact the
# need for protection on each host. Chances are pretty good that
# eventually something inside your perimeter will become a
# malware-infested zombie, attacking anything and everything it can -- and
# your typical border firewall will sit there, oblivious. The wider your
# perimeter, the more likely this is to happen.

In a higher education context, this is what I call the "20,000 of your closest friends" problem (slide 56 of http://pages.uoregon.edu/joe/architectures/architecture.pdf ), e.g., a perimeter firewall at even a mid-size university can result in a population of "trusted insiders" (users and/or hosts) bigger than some small cities :-;

# So firewalling at the network level or no, you still need to lock down
# the hosts.

Precisely.

# Locking down the hosts doesn't necessarily mean deploying a "personal
# firewall". It could (and should) first and foremost mean ensuring that
# all accessible services are secure, and that only those services that
# need to be running, are running. Do that right, and the personal
# firewall becomes much simpler.

Again, this is exactly right in my opinion.

Regards,

Joe
On Thu, Sep 06, 2012 at 01:53:48PM -0400, Haines, Ena wrote:

> If the IT dept has 250 servers managed by 3 or 4 admins, then what? Are any of your server admin teams happy with a system for managing the "personal firewall" on each server? Can you set it locally and forget it every time you deploy a new server? Don't your port requirements change as ours do when there's an app upgrade or a middleware upgrade, etc.?
>
> Some days it seems as though it's really about manageability.

I don't run 250 systems, it's closer to 25, but I easily manage the firewall rulesets on multiple servers centrally with puppet. Every service that needs a port opened pushes out a corresponding '.rules' file that gets dropped in /etc/firewall.d/.

Since I set this up I haven't had to touch the firewall ruleset on an individual machine.

--
-- Justin Azoff
-- Network Security & Performance Analyst
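As a rough sketch of the pattern Justin describes, a puppet class for a single service might look something like the following. The class name, rule contents, file ownership, and reload command here are hypothetical placeholders for illustration, not his actual configuration.

    # Hypothetical per-service fragment: drop this service's rules into
    # /etc/firewall.d/ and rebuild the host ruleset whenever they change.
    class webserver::firewall {

      file { '/etc/firewall.d/httpd.rules':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        # Open only the ports this service needs; everything else stays
        # behind the host's default-deny base policy.
        content => "-A INPUT -p tcp --dport 80 -j ACCEPT\n-A INPUT -p tcp --dport 443 -j ACCEPT\n",
        notify  => Exec['rebuild-firewall'],
      }

      # Reassemble the fragments and reload the firewall; the script name is
      # an assumed local helper, not a standard tool.
      exec { 'rebuild-firewall':
        command     => '/usr/local/sbin/rebuild-firewall',
        refreshonly => true,
      }
    }

Because each service owns its own fragment, adding or retiring a service changes only that fragment centrally, which is presumably why the per-machine rulesets never need to be touched by hand.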
Message from hhoffman@ip-solutions.net

With a combination of GPO under Windows, and SSH/{puppet/cfengine/bcfg2}/Func/etc. for *nix, it becomes pretty easy to manage large numbers of systems in a reasonable manner. I would suspect that Justin's 25 servers per person might be on the low end in our environments. If you are running that many systems and aren't using something for configuration management, you've probably already run afoul of many things, PCI being at least one of them.

Cheers,
Harry
Haines, Ena wrote:

> Some days it seems as though it's really about manageability.

Very true. Though if there are thousands of access rules that have to keep pace with changes in hardware, software, service requests, and threats, manageability is going to be a problem no matter who does it. The more granular the access controls, the more overhead.

The systems administrators have the advantage of being more knowledgeable about their systems and change plans than a network or security administrator. If they don't, or if an organization's leadership feels more comfortable with a third party controlling network access, then the systems folks will have to constantly interact with the network/security folks for access changes.

A hybrid solution may consist of the network/security administrators controlling access into a sub-network using network firewalls, and the system administrators controlling network access from adjacent systems in the same sub-network using host firewalls. This distributes the administrative overhead, provides some separation of network access control duties, and gives the systems administrators some autonomy to make changes as needed.

--
Gary Flynn
Security Engineer
James Madison University
Message from hhoffman@ip-solutions.net

Heh, yeah... looking at my netmask... a /17 puts it at 32k of my closest friends. In some ways it's kind of nice: no need to wait on that span port; with so much traffic the switches are always rebroadcasting, and watching ARP traffic makes it easy to see scanners :-)

But yeah, each host should be prepared to protect itself. In my opinion that does mean running a firewall on every system, but that becomes much easier with various built-in options as well as 3rd-party apps.

Our model is for all inter-subnet traffic to traverse the firewalls, even if the firewall is just passing the traffic without anything other than an “ALLOW/PERMIT” rule. The FW also acts as an IDS/IPS and has Data Loss Prevention/Protection capabilities. This model, while throwing A LOT of traffic through the FW, gives us a good deal of visibility on the network when things do go wrong.

 

As far as rule bases go, most of our rules deal with inbound traffic and/or data center traffic. By using a NGFW, we only allow certain applications to hit our servers. This keeps rogue processes from being spawned in the event that something is compromised. In essence, the whole network is in a DMZ.

 

Security, by its very nature, is supposed to be a paranoid topic. Even if we’re being liberal about the statistics, we’d be 50/50 on internal/external attacks. I think the idea of a border-only FW is not so much a security measure as it is a minimal security effort. The Zero Trust Networking principle that (I think) Forrester Research is pushing is something we subscribe to. We have had discussions about moving to a perimeter firewall and controlling internal traffic via ACLs. This would remove IDS/IPS functionality and create huge overhead. Central (or minimal) management is key for quick response, whether that response is to a FW change request or tracking down malicious activity.

 

-Brian Helman

 
