I've been building networks for the majority of my life at this point, and during most of that time little has changed in terms of basic network architecture. And yet, it seems that newer, faster, and "better" networking components and services are allowing network designs to deviate from the tried and true. In many cases, that's a really bad idea.
I'm showing my gray hair here, but I'll try to avoid any reference to the kids on my lawn or the onion on my belt.
Take a traditional LAN/WAN network for a medium-sized business. Back in the day you'd have a firewall with external, internal, and DMZ interfaces; internal LAN switching; and a few routers driving point-to-point or frame-relay links to other sites. All the Internet traffic flowed through the headquarters firewall, so there was a single point of egress. If there were backup links, they were likely to be ISDN lines at each site with a terminal server at HQ to dial them up if necessary.
The DMZ network was contained on a separate switch, and the various servers on the DMZ were physically connected to that, which was physically connected to the firewall's DMZ interface. Internet connectivity for the whole shebang was one or more T1s with multilink PPP or maybe a fractional T3.
Compared to today, it's a very simple setup. It's also very secure: One point of entry and exit plus physical separation of untrusted networks. And it's simple to trace and fix problems.
Today, that model is withering in the face of mixed-medium bandwidth delivery, realistic remote-office VPN scenarios, and the lack of physical separation. Let's design that same network today. (Remember, this is not how I'd do it, but how I've seen many designed recently.)
At HQ, there's an asymmetric business-class cable service in place for basic Internet browsing, and a symmetric fiber link in place for production business traffic. Both are firewalled separately -- and the firewall on the fiber link has several DMZ networks, all of which are plugged into the same switch, which is cut into non-routable VLANs but trunked to the network core in order to facilitate the array of virtual servers that need presence on those DMZs.
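As a rough illustration, that DMZ-on-a-VLAN arrangement often looks something like the IOS-style fragment below. The VLAN numbers, names, and interface designations are hypothetical, and the exact syntax varies by vendor and platform:

```
! Hypothetical sketch: DMZ segments carried as VLANs on a shared switch
! rather than on a physically separate DMZ switch.
vlan 100
 name DMZ-WEB
vlan 101
 name DMZ-MAIL
!
! Firewall's DMZ leg lands on an access port in one of those VLANs
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 100
!
! Trunk to the network core so virtual servers can sit on the DMZs
interface GigabitEthernet1/0/24
 switchport mode trunk
 switchport trunk allowed vlan 100,101
```

Note that the security of the whole design now rests on that `allowed vlan` list and on every port assignment being right, rather than on a cable that either is or isn't plugged in.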
The remote offices may be fortunate enough to be in the service area of the same fiber provider, so they connect back to HQ via an AES256 VPN handled by the main firewalls. Offices in other areas may still be connected via T1 or fractional T3, smaller sites by standard VPN. There are no backup lines because the cost is prohibitive and ISDN won't provide anywhere near the bandwidth required to run those offices today. Each office peels off a portion of its fiber connection for Internet connectivity, so Internet traffic doesn't flow back to HQ. This requires that Internet policy maintenance be implemented at each site, not just HQ, which can be costly depending on the type of solution in place.
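For the site-to-site piece, the gist on an IOS-style firewall or router is a handful of crypto statements. This is a hedged sketch, not a working config: the peer address (drawn from the 203.0.113.0/24 documentation range), ACL name, and key placeholder are all hypothetical:

```
! Hypothetical site-to-site IPsec sketch using AES-256
crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
crypto isakmp key <pre-shared-key> address 203.0.113.10
!
crypto ipsec transform-set TS-AES256 esp-aes 256 esp-sha256-hmac
!
crypto map VPN-MAP 10 ipsec-isakmp
 set peer 203.0.113.10
 set transform-set TS-AES256
 match address VPN-ACL
!
interface GigabitEthernet0/0
 crypto map VPN-MAP
```

Multiply that by every remote site, each with its own local Internet breakout and policy enforcement, and the administrative surface area of the "modern" design starts to become clear.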
In this scenario there is no physical separation of anything; it's all controlled within the firewalls and switches. The core switches are carrying trusted and untrusted traffic and in places trunking that traffic to virtualization hosts, edge switches, and so forth.
From a convenience standpoint, this might make sense. If your admins are absolutely grade-A, top-notch, it might make sense. For everyone else, it can quickly turn into a disaster as even a casual attitude regarding switchport assignments and allowed VLANs on trunks can open up security holes the size of volcanoes. Lacking that physical separation, mixing trusted and untrusted networks can be a nightmare if you're not careful, and a nightmare even if you are. Just because you can do something doesn't mean you should.
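One way to keep a casual attitude from becoming a volcano-sized hole is to audit trunk configurations mechanically. The sketch below is a made-up illustration, not a real tool: the VLAN numbers, port names, and the idea of classifying VLANs as trusted or untrusted are all assumptions for the example.

```python
# Hypothetical audit: flag any trunk port that carries both trusted
# (internal LAN) and untrusted (DMZ) VLANs. All numbers and port
# names here are invented for illustration.

UNTRUSTED_VLANS = {100, 101, 102}   # DMZ segments
TRUSTED_VLANS = {10, 20, 30}        # internal LAN segments

def audit_trunks(trunks):
    """Return a sorted list of trunk ports that mix trusted and
    untrusted VLANs.

    `trunks` maps a port name to the set of VLANs allowed on it.
    """
    violations = []
    for port, allowed in trunks.items():
        # A trunk is a problem only if it touches BOTH sides.
        if allowed & UNTRUSTED_VLANS and allowed & TRUSTED_VLANS:
            violations.append(port)
    return sorted(violations)

trunks = {
    "Gi1/0/1": {10, 20},     # trusted-only trunk: fine
    "Gi1/0/2": {100, 101},   # DMZ-only trunk to the firewall: fine
    "Gi1/0/3": {10, 100},    # trusted + DMZ on one trunk: flagged
}
print(audit_trunks(trunks))  # → ['Gi1/0/3']
```

Something like this run against nightly config pulls won't replace careful admins, but it catches the one careless `allowed vlan` change before an attacker does.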
But the cost savings can't be argued with. Terminating remote-site VPNs on the same device that also controls local DMZs and Internet access has significant benefits these days, especially if those remote sites can play on the same provider network. Suddenly, a high-speed WAN is as cheap as dirt. It's also necessarily more complex from a configuration and administration standpoint. That doesn't mean don't do it; it means do it right.
The moral of this story is that even though the transports are different, the overall architecture shouldn't vary: Physical separation of trusted and untrusted networks should be sacrosanct.
Now get off my lawn!