Weekly Musings #8 - "Good" Viruses, Part II
Author: Michael Ahlf 
Date: August 2nd 2004
Last week I had the opportunity, not to mention clarity of purpose, to write a well-received rant on the subject of whether or not there can exist such a thing as a "good" computer virus.

Quite a few readers wrote in, mostly responding with variants on two "what if" points. I thank you all for writing in, both crowds - those who agreed with me, and those who disagreed.

Reader Jonathan Sundy put the points most clearly. I wrote him a short reply, but the questions festered in my mind over the weekend, and I figured a Weekly Musings column is the right place to do the response proper justice.

So without further ado, Jonathan's two points, and my responses.

Point #1: Network Problems Could Be a Reasonable Price to Pay

A "beneficial" virus would possibly still be useful in the case where network traffic isn't the biggest threat. Most viruses now are used for launching DDoS attacks, so yes, an anti-virus virus will only really help them out, but if there was a virus whose purpose was much more malicious on a local machine level, perhaps destroying files or tainting mathematical data, perhaps the network load would be a reasonable price to pay for the security of everyone's data. Now I don't really see this being a truly good option, since there is no excuse for not patching your machine, especially once a virus is on the loose; its main benefit would be against home users who aren't as adept at keeping up to date due to the hassle or perceived lack of necessity. Microsoft's Service Pack 2 for Windows XP should help reduce the need for such a scenario due to its forced Windows updates.

The idea has merit on a primal level. Had the payload of the Blaster worms been a hard disk format, random corruption of all files ending in .doc, or the deletion of key Windows files, people might have regarded the problems Nachi caused as negligible.

Unfortunately, it's never that simple - because we're not simply describing a worm now. We're in the land of time-bomb viruses. In this scenario, we have multiple objectives.

First, we could patch machines before they get the virus. Sounds simple enough, in principle.
Second, we could patch infected machines before the payload goes off. Again, simple enough in principle.
Third, we have the rather painful task of cleaning up after the virus if it goes off before we get there. This is the worst case, and since a counter-worm can't realistically undo a payload, it is feasible to simply ignore it when writing ours.

Unfortunately for us, with a time-bomb virus time is not even remotely on our side. The key will really be the first goal, patching machines before they get the virus. The virus writer may have set his timer to go off at a certain date and time, as was the norm with early viruses (think Michelangelo) and with DDoS worms; in that case, our counter-worm COULD be feasible, but only if we give it propagation speeds in excess of what Nachi produced. That means any machine we infect will quickly saturate its available network bandwidth.

The problem in this scenario is that we've just trashed entire networks - transmission of our counter-worm actually slows down after a few machines on a network are infected. Plus, machines trying to run the legitimate patch for this - or even the patch we're trying to deliver (they may be the same) - are going to have a lot of trouble downloading it in the first place. The upside is that we can program our counter-worm to kill itself 24-48 hours after the target virus's deadline, provided nobody creates an alternate version of the target with yet another deadline in it.

Alternatively - and far more sinister - the virus writer may have written his time-bomb to go off at random times, or at a fixed interval after infection, which amounts to the same thing. In this scenario, our counter-worm is useless. Again we need a high-speed rate of transfer, but trashing networks now causes extra damage: infected machines that can't pull patches down through the congestion are actually more likely to have the payload go off before help reaches them.
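To make the kill-switch idea from the fixed-deadline scenario concrete, here's a minimal sketch of how a counter-worm might decide whether to keep running. The deadline and grace period are made-up placeholder values, not figures from Blaster or Nachi:

```python
from datetime import datetime, timedelta

# Hypothetical values: the date the target virus's payload is set to fire,
# and how long past it the counter-worm stays alive. Placeholders only.
TARGET_DEADLINE = datetime(2004, 8, 16)
GRACE_PERIOD = timedelta(hours=48)

def should_keep_running(now=None):
    """Return True while the counter-worm is still useful.

    Once the target's deadline plus a grace period has passed, the
    counter-worm should remove itself rather than keep eating bandwidth.
    This only works against a fixed deadline; a randomized time bomb
    gives us no date to key the kill switch to.
    """
    now = now or datetime.now()
    return now < TARGET_DEADLINE + GRACE_PERIOD

if should_keep_running():
    pass  # patch and propagate
else:
    pass  # clean up and exit
```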

Point #2: The Idea of the Local Worm

What about local worms? I'm not exactly informed about enterprise management, but wouldn't this worm methodology be useful for applying fixes on large corporate networks? As long as it was kept internal, it could easily patch all of your machines without having to be watched, and since there is a finite number of machines it would be able to remove itself on determining its task complete. But again, this thought comes in light of my ignorance about patch management on a large level anyway, and I'm sure such tools are already in place.

It'd work. I don't think you could find a computer scientist around who would tell you that such a methodology wouldn't work - if the worm were programmed only to attack a certain IP range, for instance, you could indeed keep it local.
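As a rough illustration of that confinement check, here's a sketch using Python's ipaddress module; the 10.0.0.0/8 range is just an example private subnet, not a recommendation:

```python
import ipaddress

# Hypothetical internal range the local worm is allowed to touch.
ALLOWED_NETWORK = ipaddress.ip_network("10.0.0.0/8")

def is_local_target(addr: str) -> bool:
    """Gate every candidate host: only scan or patch inside the fence."""
    try:
        return ipaddress.ip_address(addr) in ALLOWED_NETWORK
    except ValueError:
        return False  # malformed address: refuse to touch it

print(is_local_target("10.3.14.7"))    # True  - inside the corporate range
print(is_local_target("203.0.113.9"))  # False - outside, never contacted
```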

On the other hand, most computer scientists, mathematicians, or programmers would tell you it's an inelegant solution and probably a bad idea, somewhat akin to duct-taping your spare tire to a flat rather than simply taking off the flat tire and replacing it with the spare.

The problem is that on both the macroscopic and the microscopic scale, the most efficient way to distribute patches isn't machine to machine. Even distributing patches over a peer-to-peer network like BitTorrent will probably not fly, though I'd wager it will eventually be tried.

The best way to distribute patches for a vulnerability, or counteragents for a virus, is from a central source designed to handle the high traffic. The reasons are twofold.

First, we can establish the trusted nature of the source. When Microsoft distributes a patch over Windows Update, or McAfee sends out the latest DAT files for its virus scanner, these are "trusted" - we know the address of their servers, we know who we're contacting, and we have some assurance of support if something goes wrong in their code. Plus, we know they test their code and won't knowingly put anything malicious into it. The same can't be said for a file we get over a P2P source - the original seed may be clean, but the possibility of someone cleverly altering the code sent out from their own seed is always present. Shared-file areas demand a level of trust that we simply can't assume when we're trying to distribute something like a patch.

Second, we centralize and minimize the traffic. On the macroscopic level, we can look at Windows Update - which already has a "forced update" feature that can be set to run once per day at any time, and which users will be strongly encouraged to turn on as of XP Service Pack 2. On the microscopic level, there's Unix and Linux login scripting, which can run on any domain login, and there's Software Update Services from Microsoft, which operates in a similar manner. The point of these is that the traffic happens ONCE - each machine checks in with the server upon startup and/or login to see if it needs patches. Presumably, this happens at least once a day in any business.

In this manner, we keep the traffic levels down - only the central patch server, presumably beefy enough to handle it, risks being hammered at the start of the day or at a shift change. Plus, we get patch dissemination speed roughly equivalent to, or even faster than, anything a counter-worm could provide. We can even tell our central server, assuming it has proper permissions and control over our local network, to step in when a hyper-crucial patch appears and hit every machine on the network with it. Compare that sort of speed to even our highest-speed counter-worm, and you can see that this is a much better approach.
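For a sense of what that once-per-login check-in looks like, here's a crude sketch; the patch server URL and response format are hypothetical, standing in for Windows Update or Software Update Services:

```python
import json
import platform
from urllib.request import urlopen

# Hypothetical internal patch server; in real life this role is played by
# Windows Update, Software Update Services, or a vendor repository.
PATCH_SERVER = "http://patches.internal.example/needed"

def patches_needed():
    """Ask the central server which patches this machine is missing."""
    with urlopen(f"{PATCH_SERVER}?host={platform.node()}") as response:
        return json.load(response)  # e.g. ["MS03-026", "MS03-039"]

# Run once per startup or domain login: one small request per machine per
# day, instead of worm traffic saturating the network.
for patch_id in patches_needed():
    print(f"Fetching and installing {patch_id} from the central server")
```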

Again, it's not that a properly coded local counter-worm doesn't work as a solution. It's just that there are alternatives that are far better, and make the time spent coding the counter-worm a relatively bad investment.
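One footnote on the trusted-source point above: whatever the transport, a receiving machine can at least verify a downloaded patch against a digest the vendor publishes through a channel it already trusts before running it. A minimal sketch, with a placeholder digest:

```python
import hashlib

# Hypothetical digest, published by the vendor on its own site over a
# channel we already trust (not alongside the file on the P2P network).
PUBLISHED_SHA256 = "0" * 64

def patch_is_authentic(path: str) -> bool:
    """Refuse to install a patch whose digest doesn't match the vendor's."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == PUBLISHED_SHA256

# Download the file however you like, but verify before executing:
# if patch_is_authentic("critical_patch.exe"):
#     install_it()
```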

Got Comments? Send 'em to Michael (at) Glideunderground.com!
Alternatively, post 'em right here for everyone to see!
