Linux Gazette

November 1999, Issue 47 Published by Linux Journal


Visit Our Sponsors:

Linux Journal
InfoMagic
SuSE
Red Hat
LinuxMall
cyclades

Table of Contents:


TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette, http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-1999 Specialized Systems Consultants, Inc.

"The Linux Gazette...making Linux just a little more fun!"


 The Linux Gazette FAQ

Updated 22-Sep-1999


Contents

This FAQ is updated at the end of every month. Because it is a new feature, it will be changing significantly over the next few months.


Questions about the Linux Gazette

1. Why this FAQ?

These are the most Frequently Asked Questions in the LG Mailbag. With this FAQ, I hope to save all our fingers from a little bit of typing, or at least allow all that effort to go into something No (Wo)man Has Ever Typed Before.


2. Where can I find the HTML version of the Gazette?


3. Which formats is the Gazette available in?


4. Which formats is the Gazette not available in?

Other archive formats. We need to keep disk space usage on the FTP site to a minimum for the sake of the mirrors. Also, the Editor rebels at the thought of the additional hand labor involved in maintaining more formats. Therefore, we have chosen the formats required by the majority of Gazette readers. Anybody is free to maintain the Gazette in another format if they wish, and if it is available publicly, I'll consider listing it on the mirrors page.

Zip, the compression format most common under Windows. If your unzipping program doesn't understand the *.tar.gz format, get Winzip at www.winzip.com.

Macintosh formats. (I haven't had a Mac since I sold my Mac Classic because Linux wouldn't run on it. If anybody has any suggestions for Mac users, I'll put them here.)

Other printable formats.

PostScript
You can use Netscape's "print to file" routine to create a PostScript file complete with images.
PDF
I know Adobe and others consider PDF a "universal" format, but to me it's still a one-company format that requires a custom viewer--not something I'm eager to maintain. If you can view PDF, can't you view HTML?
Word
I'll be nice and not say anything about Word....

E-mail. The Gazette is too big to send via e-mail. Issue #44 is 754 KB; the largest issue (#34) was 2.7 MB. Even the text-only version of #44 is 146 K compressed, 413 K uncompressed. If anybody wishes to distribute the text version via e-mail, be my guest. There is an announcement mailing list where I announce each issue; e-mail lg-announce-request@ssc.com with "subscribe" in the message body to subscribe (a one-line way to do this from a shell is sketched below). Or read the announcement on comp.os.linux.announce.

On paper. I know of no companies offering printed copies of the Gazette.
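To subscribe to the announcement list mentioned above from a shell prompt, something like this should do it (a sketch; assumes a standard mail(1) command is installed):

    echo subscribe | mail lg-announce-request@ssc.com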


5. Is the Gazette available in French? Chinese? Italian? Russian?

Yes, yes, yes and yes. See the mirrors page. Be sure to check all the countries where your language is spoken; e.g., France and Canada for French, Russia and Ukraine for Russian.


6. Why is the most recent issue several months old?

You're probably looking at an unmaintained mirror. Check the home site to see what the current issue is, then go to the mirrors page on the home site to find a more up-to-date mirror.

If a mirror is seriously out of date, please let gazette@ssc.com know.


7. How can I find all the articles about a certain subject?

Use the Linux Gazette search engine. A link to it is on the Front Page, in the middle of the page. Be aware this engine has some limitations, which are listed on the search page under the search form.

Use the Index of Articles. A link to it is on the Front Page, at the bottom of the issue links, called "Index of All Issues". All the Tables of Contents are concatenated there onto one page. Use your browser's "Find in Page" dialog to find keywords in the titles or authors' names.

There is a separate Answer Guy Index, listing all the questions that have been answered by the Answer Guy. However, they are not sorted by subject at this time, so you will also want to use the "Find in Page" dialog to search this listing for keywords.


8. How can I become an author? How can I submit my article for publication?

The Linux Gazette is dependent on Readers Like You for its articles. Although we cannot offer financial compensation (this is a volunteer effort, after all), you will earn the gratitude of Linuxers all over the world, and possibly an enhanced reputation for yourself and your company as well.

New authors are always welcome. E-mail a short description of your proposed article to gazette@ssc.com, and the Editor will confirm whether it's compatible with the Gazette, and whether we need articles on that topic. Or, if you've already finished the article, just e-mail the article or its URL.

If you wish to write an ongoing series, please e-mail a note describing the topic and scope of the series, and a list of possible topics for the first few articles.

The following types of articles are always welcome:

We have all levels of readers, from newbies to gurus, so articles aimed at any level are fine. If you see an article that is too technical or not detailed enough for your taste, feel free to submit another article that fills the gaps.

Articles not accepted include one-sided product reviews that are basically advertisements. Mentioning your company is fine, but please write your article from the viewpoint of a Linux user rather than as a company spokesperson.

If your piece is essentially a press release or an announcement of a new product or service, submit it as a News Bytes item rather than as an article. Better yet, submit a URL and a 1-2 paragraph summary (free of unnecessary marketoid verbiage, please) rather than a press release, because you can write a better summary about your product than the Editor can.

Articles not specifically about Linux are generally not accepted, although an article about free/open-source software in general may occasionally be published on a case-by-case basis.

Articles may be of whatever length necessary. Generally, our articles are 2-15 screenfuls. Please use standard, simple HTML that can be viewed on a wide variety of browsers. Graphics are accepted, but keep them minimal for the sake of readers who pay by the minute for on-line time. Don't bother with fancy headers and footers; the Editor chops these off and adds the standard Gazette header and footer instead. If your article has long program listings accompanying it, please submit those as separate text files. Please submit a 3-4 line description of yourself for the Author Info section on the Back Page. Once you submit this, it will be reused for all your subsequent articles unless you send in an update.

Once a month, the Editor sends an announcement to all regular and recent authors, giving the deadline for the next issue. Issues are usually published on the last working day of the month; the deadline is seven days before this. If you need a deadline extension into the following week, e-mail the Editor. But don't stress out about deadlines; we're here to have fun. If your article misses the deadline, it will be published in the following issue.

Authors retain the copyright on their articles, but distribution of the Gazette is essentially unrestricted: it is published on web sites and FTP servers, included in some Linux distributions and commercial CD-ROMs, etc.

Thank you for your interest. We look forward to hearing from you.


9. May I copy and distribute the Gazette or portions thereof?

Certainly. The Gazette is freely redistributable. You can copy it, give it away, sell it, translate it into another language, whatever you wish. Just keep the copyright notices attached to the articles, since each article is copyright by its author. We request that you provide a link back to www.linuxgazette.com.

If your copy is publicly available, we would like to list it on our mirrors page, especially if it's a foreign language translation. Use the submission form at the bottom of the page to tell us about your site. This is also the most effective way to help Gazette readers find you.


10. You have my competitor's logo on the Front Page; will you put mine up too?

All logos on the Front Page and on each issue's Table of Contents are from our sponsors. Sponsors make a financial contribution to help defray the cost of producing the Gazette. This is what keeps the Gazette free (both in the senses of "freely redistributable" and "free of ads" :)). To recognize and give thanks to our sponsors, we display their logos.

If you would like more information about sponsoring the Linux Gazette, e-mail sponsor@ssc.com.


Linux tech support questions

This section comprises the most frequently asked questions in The Mailbag and The Answer Guy columns.


1. How can I get help on Linux?

Check the FAQ. (Oh, you already are. :)) Somewhat more seriously, there is a Linux FAQ located at http://www.linuxdoc.org/FAQ/Linux-FAQ.html which you might find to be helpful.

For people who are very new to Linux, especially if they are also new to computing in general, it may be handy to pick up one of these basic Linux books to get started:

Mailing lists exist for almost every application of any note, as well as for the distributions. If you get curious about a subject, and don't mind a bit of extra mail, sign onto applicable mailing lists as a "lurker" -- that is, just to read, not particularly to post. At some point it will make enough sense that their FAQ will seem very readable, and then you'll be well versed enough to ask more specific questions coherently. Don't forget to keep the slice of mail that advises you how to leave the mailing list when you tire of it or learn what you needed to know.

You may be able to meet with a local Linux User Group, if your area has one. There seem to be more all the time -- if you think you may not have one nearby, check the local university or community college before giving up.

And of course, there are always good general resources, such as the Linux Gazette :)

Questions sent to gazette@ssc.com will be published in the Mailbag in the next issue. Make sure your From: or Reply-to: address is correct in your e-mail, so that respondents can send you an answer directly. Otherwise you will have to wait till the following issue to see whether somebody replied.

Questions sent to linux-questions-only@ssc.com will be published in The Answer Guy column.

If your system is hosed and your data is lost and your homework is due tomorrow but your computer ate it, and it's the beginning of the month and the next Mailbag won't be published for four weeks, write to the Answer Guy. He gets a few hundred slices of mail a day, but when he answers, it's direct to you. He also copies the Gazette so that it will be published when the end of the month comes along.

You might want to check the new Answer Guy Index and see if your question got asked before, or if the Answer Guy's curiosity and ramblings from a related question covered what you need to know.


2. Can I run Windows applications under Linux?

An excellent summary of the current state of WINE, DOSEMU and other Windows/DOS emulators is in issue #44, The Answer Guy, "Running Win '95 Apps under Linux".

There is also a program called VMware, which lets you run several "virtual computers" concurrently as applications, each with its own operating system. There is a review of it in Linux Journal.


3. Do you answer Windows questions too?

Answers in either the Tips or Answer Guy columns which relate to troubleshooting hardware might be equally valuable to Linux and Windows users. This is, however, the Linux Gazette... so all the examples are likely to describe Linux methods and tools.

The Answer Guy has ranted about this many times before. He will gladly answer questions involving getting Linux and MS Windows systems to interact properly; this usually covers filesystems, use of samba (shares) and other networking, and discussion of how to use drivers.

However, he hasn't used Windows in many years, and in fact avoids the graphical user interfaces available to Linux. So he is not your best bet for asking about something which only involves Windows. Try one of the Windows magazines' letter-to-the-editor columns, an open forum offered at the online sites for such magazines, or (gasp) the tech support that was offered with your commercial product. Also, there are newsgroups for an amazing variety of topics, including MS Windows.


4. How do I find the help files in my Linux system?

The usual command to ask for a help page on the command line is the word man followed by the name of the command you need help with. You can get started with man man. It might help you to remember this if you realize it's short for "manual."

A lot of plain text documents about packages can be found in /usr/doc/packages in modern distributions. If you installed them, you can also usually find the FAQs and HOWTOs installed in respective directories there.
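For example (a quick sketch; the exact documentation path varies by distribution):

    man man        # how the manual system itself works
    man ls         # the manual page for the ls command
    ls /usr/doc    # browse installed package docs, FAQs and HOWTOs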

Some applications have their own built-in access to help files (even those are usually text stored in another file, which can be reached in other ways). For example, pressing F1 in vim, ? in lynx, or ctrl-H followed by a key in Emacs, will get you into their help system. These may be confusing to novices, though.

Many programs provide minimal help about their command-line interface if given the command-line option --help or -?. Even if these don't work, most give a usage message if they don't understand their command-line arguments. The GNU project has especially championed this idea. It's a good one; every programmer creating a small utility should have it self-documented at least this much.
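For instance, with the GNU utilities (a sketch):

    ls --help | less    # page through ls's option summary
    du --help           # most GNU tools print a usage summary the same way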

Graphical interfaces such as tkman and tkinfo will help quite a bit because they know where to find these kinds of help files; you can use their menus to help you find what you need. The better ones may also have more complex search functions.

Some of the bigger distributions link their default web pages to HTML versions of the help files. They may also have a link to help directly from the menus in their default X Windowing setup. Therefore, it's wise to install the default window manager, even if you (or the friend helping you) have a preference for another one, and to explore its menus a bit.


5. So I'm having trouble with this internal modem...

It's probably a winmodem. Winmodems suck for multiple reasons:

  1. Most of them lack drivers for Linux. Notice the term "most" and not "all" -- see http://linmodems.org for more about those few that do, and some general knowledge on the subject.
  2. Since they aren't a complete modem without software, even if they were to work under Linux, they'd eat extra CPU that could be better spent on other things. So they'll never seem quite as fast as their speed rating would imply.
  3. Internal modems have their own problems; they overheat more easily, and have a greater danger of harming other parts in your system when they fail, merely because they're attached directly to the bus. The tiny speed increase they might lend is not really worthwhile compared to the risk of losing other parts in the system.

    So, yeah, there can be good internal modems, but it's more worthwhile to get an external one. It will often contain phone line surge suppression and that may lead to more stable connections as well.


    This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
    Copyright © 1999, Specialized Systems Consultants, Inc.,

    "Linux Gazette...making Linux just a little more fun!"


     The Mailbag!

    Write the Gazette at gazette@ssc.com

    Contents:


    Please, readers, e-mail your questions and comments in TEXT format, not HTML. And if your mailer splits long lines by putting an "=" at the end of the line and moving the last character or two to the next line, please try to turn that feature off. Also some mailers turn punctuation and foreign characters into "=20" and "=E9" and the like. I can't reformat those, since I don't know what the original character was! -Ed.

    P.S. This is the first time ever I have resorted to blinking text, which I usually despise. I understand some mailers don't allow you to turn off this obnoxious "multimedia" formatting. But if you can, please do so.


    Help Wanted -- Article Ideas

    Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue in the Tips column.

    Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.


     Thu, 28 Oct 1999 09:21:39 -0700
    From: Linux Gazette <lg@ssc.com>
    Subject: Filename extensions for web program listings

    Hi, astute readers. Your Linux Gazette editor has a question for you. With this issue, I've started moving program listings that are included in articles into their own separate text files, to make it easier for those who want to run the programs.

    My question is, which filename extensions are safe to use so that they'll show up properly as text files in the browsers? I'm wavering between using a language-specific extension (.c, .sh, .pl, .py, etc.) vs putting .txt at the end of all of them (or .sh.txt, etc.) What about listings that don't have an extension on the source file? They display as text on my browser, but do they display properly on yours?

    Language-specific extensions would be ideal, because they offer the possibility of syntax highlighting if the browser supports it. (Does any browser support this?) However, I know I've tried to view files on other sites that I know perfectly well are text-readable, but the browser insists on downloading them rather than viewing them because it doesn't recognize the type. (Of course, that's better than the opposite problem, where it tries to view .tar.gz or .mp3 files as text.)

    Of course, the ultimate answer is to fix your mailcap entry and MIME types, but that can be tedious. Also, the person viewing the site may not know how to set the MIME types properly.

    So which is better: language-specific extensions, no extensions, or .txt?


     Thu, 23 Sep 1999 14:56:32 -0700 (PDT)
    From: Angelo Costa <angico@yahoo.com>
    Subject: 3-button mouse on X Window System

    Can anybody help me with this simple (I guess) problem? My three-button mouse works fine on the console, but it doesn't when I "startx". What's going on? How can I solve this problem and start using the middle mouse button under X? Any suggestions will be appreciated.

    Thanx,
    Angico.


     Fri, 24 Sep 1999 17:50:53 -0500
    From: Bret Charlton <bret@bluebonnet.net>
    Subject: Monitor

    I am new to Linux and I am having problems with my monitor. Do you know where I might be able to get some help?

    [We'll need some more information. What exactly is the problem? What kind of monitor and video card do you have? -Ed.]


     Sun, 26 Sep 1999 05:31:15 -0600 (MDT)
    From: Dale M. Snider <dsnider@nmia.com>
    Subject: Linux hangs when out of swap

    When running memory-intensive programs, such as animate from ImageMagick or General Mesh Viewer (GMV) out of Los Alamos Labs, Linux hangs when the swap space limit is reached. The only option is to power off the computer. This is repeatable (sad to say, it has happened too often lately).

    This happens on the Red Hat 6.0 and 5.2 releases. Is there a way to force the application to abort, and not the kernel, when the swap limit is reached?

    I am using Red Hat 6.0 on a PIII 500 MHz Intel computer, Linux 2.2.5-15 (root@porky) (gcc egcs-2.91.66).

    Memory:    Total      Used      Free    Shared   Buffers    Cached
    Mem:      257876    254496      3380     21292    203096     23624
    Swap:     136544         0    136544
    

    Cheers
    Dale

    [Take a look at the ulimit command (built into bash and other shells). It tells the kernel not to let this process use more than X amount of resources. I use "ulimit -c 0" to prevent core files from being created. There are several options dealing with memory, although I haven't used them. The most promising looks like -v, which sets "the maximum amount of virtual memory available to the shell".
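    For example, something like this might confine a memory-hungry program before starting it (a sketch; in bash the -v limit is in kilobytes, and the exact behavior depends on your shell and kernel):

        ulimit -v 200000    # cap this shell and its children at ~200 MB of virtual memory
        animate huge.gif    # now gets an allocation failure instead of exhausting swap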

    I have also had problems when the swap limit is reached, but not exactly like what you describe. Unfortunately, Linux's otherwise excellent memory manager is not quite up to par in this situation. The kernel is supposed to start killing processes when a critical stage is reached to free up some memory; however, sometimes that doesn't happen properly.

    I had a situation happen while I was out of town where apache and squid zombied for no apparent reason, and there were no error messages in the syslog. I restarted them (after clearing out the PID lock files so they would consent to restart) and they ran OK. Then I realized syslog wasn't running, which was the reason I had gotten no error messages for the past day. Then I noticed there were a whole lot of zombies, and when I tried to "kill -9" them, they remained. "Update" (the daemon that flushes files to disk) was also a zombie. Tried to run "shutdown", but it wouldn't do anything. Tried switching runlevels, still didn't help. Finally I realized init wasn't running! How do you shut down the system when you can't run "shutdown"? The old-fashioned way: close as many files as you can, run sync, press reset, and hope for the best. When it came up again, I had lots of fsck errors, two lost+found files (fortunately non-critical), and all the files I had created or modified over the past day were unchanged. Fortunately, the changes I had made to a text file two days before, which I had worked a whole day on, were still there. When I asked people what could have caused this, the consensus seemed to be the system had probably run out of memory. This is with kernel 2.2.10 on Debian. Fortunately, the problem has not repeated; and doubly fortunate, it didn't happen during the time I was away and had to log into the box remotely to check my e-mail; and triply fortunate, exim (in non-daemon mode) ran fine the whole time. -Ed.]


     Sun, 26 Sep 1999 13:59:49 -0400
    From: Dyslexic <dyslexic@mindspring.com>
    Subject: agp support in linux

    Does Linux support the AGP port? I have Linux set up on an AMD K6-2 450MHz with 160 megs of RAM, a 13 gig HD, an HP CD-RW, a Zip drive, and an SB AWE64 on an EPoX MVP3G motherboard. I am using a Creative Graphics Blaster Banshee AGP card with 16 megs of RAM.

    I have installed Linux-Mandrake 6.0 (from the CD included with Maximum Linux magazine). During the installation, the only error I encountered was that the bootloader wouldn't install.

    Right now I am booting with a floppy disk. When Linux boots up, the resolution is set at about 640 x 480 and is very difficult to work with. Is there any way to increase the resolution? I have checked several FAQs and have found nothing helpful. I don't know anyone who uses Linux, so I'm pretty much flying blind here.


     Mon, 27 Sep 1999 11:11:20 +0100
    From: Linda Fulstow <linda.fulstow@easynet.co.uk>
    Subject: Epson 800 printer driver disk

    Linda Fulstow SCOPE 01752 788099

    We need to install the above and need a driver installer disk; can you help? E-mail us or please call 01752 788099, we are desperate.


     Mon, 27 Sep 1999 10:58:22 -0500
    From: EuphoriaDJ <eddthompson@ssi.parlorcity.com>
    Subject: iMac and Linux ethernet (& FreeBSD maybe)

    I have all the wires hung, the hub powered and the computers on. I would like to share files between my iMac and Linux box, and later on, when I get it running, my 68k FreeBSD Mac. Also I would like to serve X windows to the Mac from Linux.

    Any help would be excellent.
    TTFN

    An Elephant: A mouse built to government specifications
    Never try to out-stubborn a cat
    Natural laws have no pity


     Tue, 28 Sep 1999 03:51:36 +0000
    From: Ben <benvh@wish.net>
    Subject: AT-command error message

    Whenever I try to run "at" I get an error message, like so:

    root@benzz:> at 10:15 command
    Only UTC Timezone is supported.
    Last token seen: command
    Garbled time

    This is actual output. My system _is_ on the UTC timezone; the at man-page didn't help a bit. Someone suggested that I should write a file:

    echo command > file
    at 10:15 < file

    but that wouldn't help, as "at" is still in there, and it's "at" making trouble. Does anybody know what I'm doing wrong? Or know another way to schedule tasks? I'm getting desperate now...

    May the Source be with you.


     Wed, 29 Sep 1999 22:24:29 -0700
    From: Wayde C Gutman <wcgutman@mwpower.net>
    Subject: LS120


    I would like to know exactly what I need to put into /etc/fstab to get OpenLinux 2.2 to see the LS120 drive. My system has the 1.44 floppy drive at fd0, the hard drive at hda and hda1, and the CD-ROM at hdc. I tried the approach Caldera suggested for owners of OpenLinux 1.3; it didn't work, or I messed up, which is possible since I am still a greenhorn at this.


     Fri, 1 Oct 1999 16:43:35 +0800
    From: u <leeway@kali.com.cn>
    Subject: program that plays Video Compact Discs (VCDs) and more

    I have RH 5.1. Is there any program that plays Video Compact Discs (VCDs)? Last month I posted the same question from leeway@tonghua.com.cn. Unfortunately, that free e-mail box does not work now, and I got no help. I apologize and thank anyone who replied.

    Something very strange happens: my Win97 can't use the CD-ROM after a week of installation, but Linux has no problem. Now Win97 uses MS-DOS compatibility mode to access the hard disk and it's slow. Does anybody have any idea on this?

    I use MixViews on Debian to record a wave file input from a cassette player. I use a 3k sample rate/16 bits and it plays fine, but it sounds terrible in the Win97 recorder. When I change the sample rate to 8k, it's OK on both. Why? Is there any wave-to-MP3 utility? Also, the mixer does not remember its settings, so I have to adjust it each time. How can I solve that?

    PS: are there any icons for Red Hat and Debian that I could use to launch Linux from Win97? I already made the shortcuts but can't find good icons, and I'm not a good artist. I would appreciate it if you could e-mail icons to me.


     Sat, 02 Oct 1999 21:29:21 +0200
    From: 2095910 <albert.prats@campus.uab.es>
    Subject: Tryin' to install a Diamond SupraExpress 56i V PRO

    I have a problem with my new modem. I tried to install it under Red Hat 5.2, but it doesn't work. My modem is an internal Diamond SupraExpress 56i V PRO, and under W98 the default configuration is IRQ 12 and I/O port 0x3e8. Under W98 it works perfectly, and I don't think this is a "winmodem" (is it?). Windows "says" that under DOS it must be configured with COM 3, IRQ 4 and I/O port 0x3e8 (/dev/ttyS2, isn't it?).

    I just want to know if this is a WinModem or not, and how I can install it.


     Mon, 4 Oct 1999 18:08:56 -0400
    From: The Wizard <wizard@openface.ca>
    Subject: My Windows partition has full access for root only

    I have 2 questions:

    I have partitioned my HD in 4 partitions.

    1. Win98 (Filesystem is FAT-Win95)
    2. Linux Swap
    3. Linux OS
    4. Personal Data (Filesystem is FAT-Win95)

    Question 1.
    Both the FAT-Win95 filesystem partitions get mounted properly in Linux, but the problem is that only root has read/write/execute permission. The other users only have read/execute permissions. How can I set it up so that everyone has r/w/x permission to the mounted filesystems (and all the subdirectories within them)?

    Question 2.
    If I access any file from the FAT-Win95 filesystem and make a change to it within Linux, when I boot into Windows, that file is marked as "read only". Any idea why this is happening and how I can stop it?

    Maybe the two are related. Any help will be greatly appreciated.


     Wed, 06 Oct 1999 01:22:19 -0700
    From: Zac Howland <howla_j@cs.odu.edu>
    Subject: Diamond HomeFree Phoneline Home Networking

    I recently bought a Diamond HomeFree Phoneline Networking kit. It works great in Windows, but I use Linux most of the time on my PC and was wondering if anyone knew how to set it up for a Linux machine. My PC is the "Administrator", so I need it to work so others in my home network can still access the net while I am working in Linux.

    Thanks


     Wed, 06 Oct 1999 13:05:18 +0200
    From: Sandra Uredat <a2844745@smail.Uni-Koeln.DE>
    Subject: KDE slower than windoze?

    Hi all,

    I've just installed Linux on my Acer Notebook 370 and I thought everything worked fine. But when I'm running KDE, it takes about 5 minutes to open Netscape! Is there anybody out there who knows what's wrong with my installation?

    Thanx in advance
    Sandra


     Sat, 9 Oct 1999 04:31:52 +0900
    From: Ganesan <cs7505@cs.inf.shizuoka.ac.jp>
    Subject: CDROM MOUNT FAILURE DURING INSTALL

    I am trying to install Red Hat Linux 6.0 on my note PC, but I can't get it done.

    I always get the message mount failure.

    I searched the FAQ, but all the questions are about how to mount after installing.

    I am getting this message after my PC searches for the PCMCIA card. My PC managed to find my PCMCIA SCSI card (Adaptec 152x), but after that the message says "CDROM Mount Failure - Block device required".

    Please tell me how to do it.

    Thank you.
    Ganesan


     Sun, 10 Oct 1999 09:23:42 -0400
    From: Brad Renner <banner99@iapdatacom.net>
    Subject: LINUX for a 486

    I read about Linux in a recent issue of a computer magazine. I really don't use my PC for much of anything but work. (I use it to run a Roland PNC 1410 vinyl cutter.) I am, to say the least, curious about Linux. I also have an old Toshiba Satellite T1910CS. It's a 486 with 4 megs of RAM and, I believe, a 200 meg hard drive. A friend of mine was going to throw it away, so I took it. I would love to experiment with Linux if there is a version available that will run on it. Windows 95 just crawls on the thing, and I've recently been using DOSSHELL. The only thing I really will be using it for is keeping track of customers, printing invoices, and e-mailing my wife.

    Thanks
    Brad Renner

    [I have a Toshiba Satellite 486-75, 16MB RAM, 500MB HD, and it has been running Linux for four years.

    I would not recommend trying to install Linux on a 4MB machine if you're not familiar with Linux. It would have to be done the "old-fashioned way", without the automatic installation utilities the current distributions have. You would have to use an old kernel (perhaps from the 1.x series). For your efforts you would get a server that could perhaps be used as a one-purpose dedicated server or as a dialout terminal, but that's it.

    I used to work for an ISP where we used 386s (8MB RAM) and then 486s (16MB RAM) as routers. The 386s worked fine with the then-current version of Slackware (this was 1996), although we upgraded the memory to 16MB on the higher-traffic ones. The worst problem was never knowing when the ancient hard drives would fail. The 486s (1998) were much more reliable.

    Linux is very scalable and can be used on a wide variety of machines, but of course some of its features aren't usable on lower-end machines. You didn't say what capacity your desktop PC has. I would consider 16MB a minimum amount of memory for a general-purpose machine that is not running X-windows, and 32 (or more) if you want to run Netscape or an office suite. -Ed.]


     Mon, 11 Oct 1999 13:26:21 +0200
    From: Mr. Tibor Berkes <berkes_t@netlock.net>
    Subject: TACACS+ and RADIUS

    I would like to know whether the TACACS+ and RADIUS authentication servers for Internet Service Providers can authenticate by an X.509 certificate held by the customer, so that Dial-Up Networking requires no login and username.

    I look forward to hearing from you,


     Tue, 12 Oct 1999 00:31:10 +0200
    From: Th. Fischer <frosch@cs.tu-berlin.de>
    Subject: Compiling everything myself

    Greetings, ladies and gentleusers.

    I would like to compile my own Linux system. Not just the kernel. Everything. I've got enough room and partitions on my disk(s) to do it. Do not tell me to buy a distribution. Until now, I've tried a lot of them -- I count eleven on my shelf -- and I do not like any of them the way I would like a self-created one.

    I just need a place to start. All of the distributions must have started at some point or another -- how did they do it? Please point me to a location where info may be obtained. The LDP seems to provide _nothing_ related to this task.

    Every hint will be highly appreciated. I would also love to contribute documentation of the process to the Free Software community.

    Every reader is invited to answer via email.

    Thorsten Fischer


     Tue, 12 Oct 1999 09:50:02 -0600
    From: Tom Miller <tjmiller@DATC.TEC.UT.US>
    Subject: Looking for suggestions and ideas for a Linux-based class

    By way of introduction, I am a computer and networking instructor at Davis Applied Technology Center (the ATCs are Utah's equivalent of VoTech schools). My original industry background is in *ix-networked mainframes, LAN/WAN architecture, and in mixed-environment networks (mixed, as in putting *ix and MS in the same coherent network -- I even went and got an MCSE to certify in the MS half of it).

    Digression aside, this is my predicament:

    Having recently come on board as an instructor here at DATC, I noticed that the UNIX curriculum was way out of date and had but a single small class (they were still teaching an older version of SCO Unix as the core OS). I proposed to update it, and a deadline of January 2000 was set for the basic course, March 2000 for the advanced/sysadmin-level course (though it should be done at about the same time as the basic course).

    Currently, I have a basic course outline (using Linux as the core OS), and have found the textbooks for the courses (which I have split up into basic *ix and advanced sysadmin-level *ix).

    My question to all of you in the industry is this: What parts of Linux, and the networking of same, are most important to you? Should there be more concentration on TCP/IP fundamentals (which I have included), on specific Linux/*ix-based programs (KDE, Gnome, Apache), or on something else? What is it that you most desire in an entry-level (or not-so-entry-level) employee candidate?

    I do have a structure based on my own opinions, yes, but since our mandate at DATC is to match industry needs, I wanted to get the widest base of opinions possible.

    (An aside -- I know that Red Hat is working to get a cert program together, but until it's fully in place (and until all the testing centers carry it), I've got a curriculum to build.)

    Please feel free to send all of your ideas, suggestions, and a brief description of why they should be implemented to me here at tjmiller@datc.tec.ut.us. Especially encouraged are those in the industry who hire entry-level IT professionals. I would also appreciate a brief description of what your company does in the industry, if you would be so kind.

    My gratitude in advance,
    TJ Miller jr


     Tue, 12 Oct 1999 15:21:53 -0400 (EDT)
    From: Roberto Novoa Quiñones <rnovoa@ucfinfo.ucf.edu.cu>
    Subject: Desde Cuba [From Cuba]

    Greetings to all. I had the opportunity to read an article from this magazine on the Internet, and so I would like to correspond with you and, if possible, have you send to this address information about the Year 2000 problem and the consequences it will bring for the economy, or for any other field, not just the economy.

    I am a student in the Faculty of Mechanical Engineering at the Universidad de Cienfuegos, and I am currently in the third year of my degree. Your help will bring me great satisfaction, since I cannot obtain this information by any other means. Thank you.

    Fraternally,

    Marco Novoa Quiñones.


     Tue, 12 Oct 1999 23:50:19 -0700
    From: Ken Deboy <glockr@alternavision.com>
    Subject: Source for ls command?

    Hi, I'm looking for the source code for the ls command on my Red Hat (4.2) CD-ROM under the SRPMS directory, but I can't find it anywhere. I also did a 'find / -name ls* -print' on my system, and it found the binary but not the source file. Can you please tell me where it is? Thanks :)

    [It's part of a larger package. I use Debian, so I would type:
    	dpkg -S ls | grep bin/ls
    	
    (The grep is there because of the large number of hits on the bare substring "ls".) This shows which package contains the file:
    	fileutils: /bin/ls
    See the rpm manpage; there should be an option that does a similar thing. In any case, the package is probably called "fileutils" on Red Hat too, since both distros got it from the same source. -Ed.]
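    [On Red Hat, the equivalent query appears to be rpm's file-ownership option (a sketch):

    	rpm -qf /bin/ls

    which prints the name of the package that owns /bin/ls.]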


     Thu, 14 Oct 1999 20:18:03 +1000
    From: Hakon Andersson <hakon@netspace.net.au>
    Subject: i740 AGP

    I wish to run my i740 AGP under Linux. I am a Linux newbie, though. I was wondering if you could tell me, or direct me to some resources on, how to set up my i740, or which server to install during installation. I am installing Red Hat 5.


     Thu, 14 Oct 1999 17:13:34 +0530
    From: uday rajput <udayrajput@hotmail.com>
    Subject: final year engg project on VPN

    Sir, I am a final-year student in India toiling with the idea of a VPN as a final-year project. A Virtual Private Network is a virtual concept for me till now. I desperately need help, as time is running out.

    resources at hand:


     14 Oct 99 18:33:36 MDT
    From: Wasim Ahmed <gracewasim@usa.net>
    Subject: Creative 3D Exxtreme Driver Needed for Linux

    I'm a newbie in Linux, but I have a great background in the computer field. Right now I'm using Win 98 and NT. I have used Linux before. Right now, I have a 233MMX, 40MB RAM, CD-ROM, 5.1GB HDD, 100MB Zip drive, Creative AWE64 sound card and Creative 3D Exxtreme Graphics Blaster.

    Now I have installed Red Hat Linux 5.2. The installation was successful, but my X Window System is not running, because I don't have a driver for the 3D Exxtreme.

    Can you please help me by providing the driver, or tell me where I can find it?

    Please, that would be a great help to me.


     Fri, 15 Oct 1999 10:51:15 +0200
    From: Stephan Schulz <sschulz@cvbg.stl.sn.schule.de>
    Subject: Need some help installing a Voodoo3 3000 AGP under X on Linux

    Is there a free X server for Voodoo3 3000 AGP cards? If yes, please tell me where to get it and how to use it!


     Fri, 15 Oct 1999 08:31:55 -0700
    From: Linux Gazette <lg@ssc.com>
    Subject: Re: sample

    On Fri, Oct 08, 1999 at 02:15:08PM +0800, a reader wrote:

    Supplying some 8-10 sample installation plans would be of great help to beginners like me.

    Hi. What do you mean by sample installation plans? Do you mean a list of packages to install? Step-by-step installation instructions? Or something else?


     Fri, 15 Oct 1999 08:50:15 -0700
    From: James M. Haviland, RN <jhavilan@oz.net>
    Subject: Re: Linux Gazette #46 (October 1999) is available

    I can download the "tar" file [of the Linux Gazette], but how do I read it? I have OpenLinux 2.3 at the moment, but I don't seem to be able to install it. Doesn't like my CD player(?).

    TIA

    Oh yes, this is Eudora Lite. I like it better than the reader that comes with 2.3. I didn't find Pine in the install. Yes, I can download the tar file, but how to install it is another question.

    [Download lg-base.tar.gz and the lg-issue##.tar.gz files of your choice. Run tar xzvf lg-FILENAME.tar.gz for each file. They will all expand into a subdirectory "lg". (Run man tar for an explanation of the options.) Then in your favorite web browser, go to the URL file:/FULL-PATH-TO/lg/index.html (using the real full path, of course). -Ed.]
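    [Concretely, using this issue as an example, the whole session might look like this (a sketch; substitute your own paths):

    	tar xzvf lg-base.tar.gz
    	tar xzvf lg-issue47.tar.gz
    	netscape file:/home/yourname/lg/index.html

    ]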


     Wed, 23 Oct 1996 11:24:47 -0400
    From: Thomas Russo <webmaster@baybiz.net>
    Subject: su not working

    Hello. I am writing because, as of 2 weeks ago, I have lost the ability to su to root. I can still log in as root. I can also su from root to normal users. I am running Red Hat 6.0 with kernel 2.2.5-15 on i686. I am currently running a live Apache web server, version 1.3.6. I have been told that the loss of the ability to su to root could be a sign of an intruder. I am hoping this is not true. I am further hoping that this is just some setting that can be adjusted to remedy it. I am at a complete loss as to what to do, and I am hoping that you can help me with this. If there is any other information that I have left off, I apologize. Thank you in advance.

    The Editor wrote:

    I don't have answers, but some possible strategies:

    1) Check /etc/passwd (and /etc/shadow if it exists) for any users besides root with UID 0. These should probably be removed, or at least put an 'x' or '*' or something at the beginning of the password so they can't log in. (A quick way to check this, and item 3, is sketched after this list.)

    2) Change your root password (and other passwords).

    3) Check for all programs called "su" on the system. Only one should exist, /bin/su. The others could be trojan horses. Do you get the same behavior if you type "/bin/su" to run it rather than just "su"?

    4) Reinstall the package containing /bin/su. (shellutils?)

    5) Read the man and/or info pages for su carefully: there may be a configuration file somewhere that determines who can su.

    6) What error message did you get? Login incorrect? Permission denied? Forbidden to su root as this user or on this terminal?

    7) Are you using shadow passwords? There could be an inconsistency in the password configuration: are all the passwords in /etc/passwd *'d out? Or is there a password in /etc/passwd that is different from /etc/shadow? Shadow passwords are supposed to be an all-or-nothing approach, but sometimes one gets inconsistencies in that some programs (login, passwd, su, getty, adduser) use/modify the shadow file and others don't. I would not expect this on a modern Red Hat installation, though. If you do notice a discrepancy, all login/authentication packages should be replaced; have a boot floppy handy in case you lock yourself out of your system.

    8) Are you using NIS? This would add another layer of complexity which I'm not qualified to comment on.
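    A sketch of the checks in items 1 and 3, assuming bash and standard tools:

    	awk -F: '$3 == 0 {print $1}' /etc/passwd    # item 1: list every UID-0 account; only root should appear
    	type -a su                                  # item 3: every "su" the shell can see; only /bin/su should exist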

    Thomas wrote back:

    [parts of e-mail deleted]

    I can't find anything [in the su man page] except a mention of the wheel group

    I get "incorrect password"... when it is the correct password. I have now determined that any su involving a password fails for the same reason... incorrect password.

    I am using shadow passwords, and I have found an inconsistency. In passwd there is one user named ken (as it should be); however, in shadow there is a ken and a Ken (there should not be a Ken). So according to you, I should replace all the packages for login, which I have not done yet, nor am I sure how to do. Are they RPMs (I hope)?

    No on the NIS

    The Editor responded:

    If your system uses a wheel group, only people in the wheel group are allowed to su root. Add your username to the wheel group in /etc/group. You'll then need to log out and back in again. Run the command "groups" to see which groups you're in.
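    For instance, the wheel line in /etc/group might end up looking like this (a sketch; "thomas" stands in for your real username, and the GID may differ):

    	wheel:x:10:root,thomas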

    /etc/passwd and /etc/shadow should not have any lines that aren't either genuine users or the pseudo-users that came with the OS (audio, floppy, dip, nogroup, users, etc) or installed by packages (majordom, news, irc, etc). The pseudo-users normally have a password "*" to prevent anybody from logging in as them (except "news" perhaps if you have a news administrator that needs to be user "news" to do administrative work).

    There should be RPMs for all the login-related programs. Look through the descriptions of packages on your CD and you should find them. The shadow utilities will be in a separate package.

    I would fix any inconsistencies or unauthorized users in the passwd and shadow files first, and then reinstall the packages if things still aren't working right.

    Thomas wrote again:

    I really appreciate all your advice. I have found the problem. Apparently the permissions on /bin/su were set to rwxr-xr-x instead of rwsr-xr-x. I feel really stupid for overlooking such a thing. I still don't know how it got changed. I am guessing that it was not an intruder; I cannot see a motive to do such a thing... but who knows. I still had that strange user that did not belong; I just edited him out of shadow. Once again, thanks.
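    [For anyone who hits the same thing, the repair Thomas describes would be run as root (a sketch):

    	chmod u+s /bin/su    # restore the setuid bit
    	ls -l /bin/su        # should now show -rwsr-xr-x, owned by root

    ]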

    The Editor lamented:

    Ach, I didn't even think about that.

    Thomas added:

    Probably not worth printing anymore, huh? Once again, thanks. If you ever have any thoughts on how that extra user got into shadow, feel free to let me know.
    Thomas

    The Editor concluded:

    Regarding the unknown user: considering it's your own name, it may be that you typed it that way at some prompt during the initial installation.

    Regarding the e-mails: they're still worth printing because they may help somebody else.


     Sun, 17 Oct 1999 14:25:10 -0400
    From: CYBERSTORM <cyberstorm@prodigy.net>
    Subject: LINUX!!!!!!!!!!

    I can't figure out for the life of me... why is it so hard to get a modem recommendation for Linux??? This is all I'm asking for! I've seen the text on what not to use, but no information on what to use. Hmmm...

    1. A modem for Linux on an IBM Aptiva... once more, any recommendations???

    Anybody!!

    The Editor wrote:

    [I don't know the IBM Aptiva, but assuming it's an ordinary PC...

    Any modem that's not a Winmodem is fine. If it says on the box that it works with DOS and/or Macintosh as well as Windows, it should work. I use the US Robotics Sportster, but modems have standardized enough now that they should be pretty interchangeable. External modems are easier to configure than internal ones, because you have the status lights to tell you what the modem is doing, and you don't have to muck about with Plug-n-Delay or whether another device (a serial port?) is using the IRQ. Some people also suggest external modems are better because the heat they generate belongs outside the computer case.

    If you intend to use a 56K modem, verify with your ISP which modems are compatible with their equipment at 56K. -Ed.]

    Cyberstorm responded:

    Thanks for the extended information... it was all I wanted. Accept my apologies for the message sent earlier... it's a bit frustrating when you're on a schedule. You've been very helpful to me.


     Mon, 18 Oct 1999 10:07:16 -0200
    From: Erik Fleischer <ferik@iname.com>
    Subject: Installing Red Hat 6.1

    Hello, there.

    I have successfully downloaded Red Hat 6.1 and burned a CD, but when I try to install it -- either using AUTOBOOT from DOS or the boot disk produced with RAWRITE and BOOT.IMG -- I always get the same error message:

    running install...
    running /sbin/loader
    exec: No such file or directory
    install exited abnormally
    sending kill signals 
    etc.
    

    I have checked that there are no missing files in the stuff I downloaded, but I cannot find /sbin/loader, which is obviously a problem.

    Any suggestions?
    Erik


     Mon, 18 Oct 1999 19:27:54 -0700 (PDT)
    From: john saulen <johnsav@yahoo.com>
    Subject: install modem and printer

    I've recently installed Linux 6.1. I am having a problem installing my Zoom 56k modem as well as my Lexmark 3200 color inkjet printer. I have been to the Lexmark site and there are no drivers for Linux. Any help in this matter would be appreciated.


     Tue, 19 Oct 1999 10:49:41 -0700
    From: Dr. Nicholas Graham <ngraham@ucsd.edu>
    Subject: Dual PIII Xeon performance

    I do some intensive (multi-week runs) ocean modeling on my Dell 610 with a PIII 500 MHz Xeon. I am having a hard time finding out whether a second PIII will improve the speed of a single process, or only of multiple processes. Either way would help, but it would be nice to know before laying out the $.

    Thanks - Nick Graham


     Tue, 19 Oct 1999 16:45:40 -0500
    From: Danny R. <danny@josifa.com>
    Subject: How can 3 stand-alone PCs be hooked up with ADSL?

    I am considering subscribing to ADSL from South Western Bell (SWBell), without subscribing to their Internet service. My current ISP and web hosting service provider is UNICOMP. I have 3 e-mail accounts and 3 stand-alone PCs (no LAN connection). Each PC needs to access the Internet daily.

    Thank you for your attention and I am looking forward to hearing from you soon.


     Wed, 20 Oct 1999 12:27:45 +0200
    From: Giovanni Rizzardi <Rizzardi.Etnoteam@italtel.net>
    Subject: Modem performance ...

    I am not a newcomer; I have been happily using Linux for four years.

    Two months ago I bought a modem, and I did not have any problem putting it to work, except for a strange difference in performance when I connect using Linux or Win95 (both on the same PC but in different partitions): the connection speed using Linux is around 30,000 while using Win95 it is around 50,000.

    Till now I have not understood why. Is there anyone who can explain to me where the bug is?

    My modem is a 3Com Sportster Flash V90, and Linux is Red Hat 6.0.

    Many thanks,
    Giovanni.


     Thu, 21 Oct 1999 02:15:30 +0200
    From: Altair <aitor.sm@teleline.es>
    Subject: Free-Mathematica??

    Sorry, I don't know if I'm sending this email to the adequate address. Sorry 2: for mistakes with my English.

    Question:

    Mathematica is becoming one of the more popular programs to deal with symbolic mathematics.

    Is anybody in this world trying to create a Free-Mathematica for Linux?

    If the answer is yes, I wouldn't mind helping if possible; I'm a mathematician.


     21 Oct 1999 08:20:03 -0700
    From: <akudesia@123india.com>
    Subject: Compaq Proliant Fast wide SCSI-2

    Hi there

    I am trying to install Red Hat Linux 6.0 on a Compaq ProLiant 2000, and the installer program cannot detect the on-board SCSI controller (Fast Wide SCSI-2). I tried all the listed controllers, but none of them works.

    Where can I find a driver that works?


     Thu, 21 Oct 1999 20:46:51 +0100
    From: oliver cameron <oliver@hii.co.uk>
    Subject: Small Business Server

    Can anyone direct me to an article on setting up a Linux server for Windows/NT clients similar in functionality to Microsoft's expensive and unreliable "Small Business Server"? I need a Linux box with a proxy server (Squid), sendmail, an ISDN connection with automatic dialling configured, a fax server, a file server with Samba, automated back-ups and printer support. Has anyone produced a readable article on the subject? I have found the various HOWTOs depressingly complicated. Maybe I have missed something obvious, but I have not seen any articles devoted to setting up a simple LAN server under Linux. Any help would be greatly appreciated.

    Oliver (oliver@hii.co.uk)


     Thu, 21 Oct 1999 15:56:49 -0400
    From: Anthony Mancuso <am5008@cnsvax.albany.edu>
    Subject: 810 chipset

    I was looking through the questions and responses in the Gazette here, and I came across a question about the onboard video card, Intel's 810 chipset. I am also having problems with it. However, you didn't write a response to that person. I was wondering if you had any solutions to this problem, or any ideas. If you could help me out, I would greatly appreciate it.

    Tony
    am5008@cnsvax.albany.edu

    [I stay away from answering video card questions because I don't know much about the different cards. I am very cautious with hardware, and buy only models that I have heard other Linux users say good things about during the previous year. Thus, I have a Diamond Stealth which has worked wonderfully for years, and a Matrox Millennium II which has more video memory but apparently a bad RAMDAC (the picture blanks out for a second at random moments). It was under warranty for a year, but of course the model got retired before that, so I never did bother sending it back to the company; I just moved it into a server where it could run in text mode. -Ed.]


     Thu, 21 Oct 1999 17:12:00 -0400
    From: Max-zen <onqams@muss.cis.mcmaster.ca>
    Subject: comparison

    Why would I want to use Linux as opposed to Windows? Could you give me a comparison, or point me to some sites to look at?


     Sat, 23 Oct 1999 12:42:40 +0800
    From: Zon Hisham <zon@mad.scientist.com>
    Subject: Norton Killed my LILO

    My wife ran Norton Antivirus, and it detected that the MBR was changed. She checked the 'Repair' box.

    Now my LILO is gone. How do I install it back into the MBR?

    Currently using RedHat 6.0.

    rgds.


     Sat, 23 Oct 1999 11:29:34 +0530
    From: R . A. PATANKAR <srp@pn3.vsnl.net.in>
    Subject: SiS 6215c driver needed

    I have a SiS 6215c graphics card. Where can I download a Linux driver for the card?


     Sat, 23 Oct 1999 12:26:03 +0100
    From: CMFD <rena1@jet.es>
    Subject: Diamond SpeedStar A50

    Hello, I would like to know if the Diamond SpeedStar A50 graphics card is supported for installing X Windows in Red Hat Linux, because it has given many problems. Or are there some drivers to upgrade this card? Thank you for your attention.


     Sun, 24 Oct 1999 19:26:48 -0400
    From: E-man <falcon65@mindspring.com>
    Subject: fsck

    I was running RHL 6.0 without a UPS when, all of a sudden, the power went out. The system rebooted and started to do a forced check scan; this is not the first time this has happened. There were problems, and I was told to run fsck from a boot disk.

    LONG STORY SHORT: there are problems with it not seeing files, or something like that, and now I don't have X, it's gone, and /proc won't load.

    Any clues as to what could have happened?

    I'm a newbie, about 2 months old. I already miss my Linux!!!!


     Mon, 25 Oct 1999 11:57:40 +0200
    From: Juan Pazos <jpazos@teleline.es>
    Subject: Connecting Linux to NT

    I want to connect from my Linux home box (over an analog line and using PPP) to my office Windows NT server. I tried to find a HOWTO about it, but I could not find one. Do you know where I can get one?


     25 Oct 99 08:36:39 MDT
    From: Syed Adnan <kundalani@usa.net>
    Subject: RIVA tnt 2

    I own a Riva TNT2 Value graphics card. I'm having a serious problem with installing Linux 6. I've never used Linux before and have tried installing it several times using different servers (I don't even know what those are). I guess my normal shell screen is working properly, but the Linux GUI is not loading... do I need a specific driver for the Riva TNT, and if so, how do I install it through the shell?

    Regards
    Adnan

    P.S. Where exactly are the answers to these questions published?

    The Editor wrote:

    The question will be published in the Mailbag section of the November issue, to be published this Friday. People will send responses to you directly with a cc: gazette@ssc.com. Responses will be published in the 2-Cent Tips section of the next issue.


     Mon, 25 Oct 1999 19:02:08 +0200
    From: Fred Van der Linden <els11867@skynet.be>
    Subject: HP890C

    Can anyone send me a driver for the HP 890C printer?

    Many thanks for answering,
    Fred Van der Linden


     Mon, 25 Oct 1999 20:02:06 +0100
    From: Tom Kidd <chewbaca@tomsdig.freeserve.co.uk>
    Subject: Dialing Up my ISP

    I was wondering if it is at all possible to use my current ISP account (with Freeserve) through Red Hat Linux. If so, why does it always crash? If not, what kind of account (what ISP) do you recommend? Sincerely, Tom


     Mon, 25 Oct 1999 19:38:13 +0000
    From: roselin leong <zczcr14@ucl.ac.uk>
    Subject: Research

    I am a university student at University College London. I was wondering if I could get some help here. I am currently working on a dissertation on open source, incorporating case studies on Linux, Netscape and so on. I will also be looking at the changes open source has effected in closed-source software.

    One part of my research is analysing the business model of Linux (from Red Hat). However, I fear the information on Red Hat's website about its product may be biased, and I may not be able to get a well-rounded opinion. Hence, are there any links (apart from the technical ones) that you could recommend?

    Thank You.


     Wed, 27 Oct 1999 00:54:44 -0500 (CDT)
    From: Eric Agnew <agnew@spfc.org>
    Subject: mail bag q: 1-way cable modem woes

    6 days of scouring the mini-HOWTO, the web, deja, the linux-net archives, and trying every imaginable route(8) configuration have left me nothing short of frustrated...

    I just got a new com21 from Prime Cable in Chicago, which, of course, "doesn't fully support" anything but win95/98/nt (under which, of course, it works fine). I'd much prefer to have it up on the Linux box, so all the machines can use it (currently 3-5 machines share a 28.8 connection; ugh!).

    They give you a username/pw, a number to dial into w/ a regular modem, and have you set up the ethernet as 10.0.0.1/255.255.255.240.

    I can dial in ok, ping other machines on the same subnet (which come back over ppp0), etc. Problem is, the only thing I get on eth1 are arp who-has packets and pings from 10.0.0.14 (which I'm guessing is their router) to other hosts on the subnet.

    I'm running debian potato w/ a fresh 2.2.13 kernel & just about every networking option compiled in. I've disabled ipchains & eth0 (internal lan) until I can get this thing working. I've tried every set of ifconfig & route commands I could think of, & still nothing.

    If anyone out there has a similar cable/ppp setup, the outputs of 'ifconfig' and 'route -n' would be of immense help (all I've been able to find are RH sysconfig examples), as well as anything unusual (special ppp hacks, kernel modules, etc.) that was necessary to get it to work.

    Any help greatly appreciated. Thanks.
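
    [For what it's worth, the usual arrangement for a one-way cable modem is asymmetric routing: downstream traffic arrives on the ethernet interface while upstream traffic leaves over PPP. A minimal sketch using the addresses you quoted (whether the provider's head end actually routes replies down the cable to 10.0.0.1 depends on their configuration):

     ifconfig eth1 10.0.0.1 netmask 255.255.255.240 up
     route add -net 10.0.0.0 netmask 255.255.255.240 dev eth1
     route add default dev ppp0

    Readers with a working setup, please send your actual 'ifconfig' and 'route -n' outputs. -Ed.]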


     Wed, 27 Oct 1999 10:19:20 -0700
    From: Irfan Majeed <irfan@indiagate.com>
    Subject: Re: Please Help

    We are using POP3 on Linux. Can a particular user be restricted from sending outgoing mail? And is an auto-responder possible for internal mail?

    What is it exactly you want to know?

    • Whether a certain user can only send mail but can't receive it?
    • Whether the user can be forbidden from sending mail, but can still receive it?
    • Is it possible to set up an automatic response message that will go out if the user receives any mail?

    Which mail transport program are you using? (sendmail, exim, smail, qmail, postfix, etc.)

    People normally use POP to receive mail, but they send mail directly to the mail transport program, so it should be possible to restrict one without the other. I don't know how you would configure the restriction, though. Mail transport programs have a configuration option to prohibit relaying from certain domains, but I don't know whether that can be used to reject mail from certain local users.

    Autoresponders usually work via the "vacation" feature of mail transport programs, normally using a .vacation.msg file in the user's home directory containing the body of the message.

    POP usually works on top of the normal mail-reading mechanism. That is, the mail transport program delivers received mail to the user's spool file, and then the POP server acts like a normal mail reader and picks it up from there.
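
    As an illustration, here is a minimal vacation setup on a sendmail-style system; the user name "jdoe" is a placeholder. Put the reply text in ~/.vacation.msg, run "vacation -i" once to initialize the reply database, and then create a ~/.forward containing:

     \jdoe, "|/usr/bin/vacation jdoe"

    The backslash keeps a copy of each message in jdoe's own mailbox, while the piped copy triggers the auto-reply.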


     Wed, 27 Oct 1999 23:47:40 -0500
    From: Smita Narla <snarla@cse.unl.edu>
    Subject: Re: thank you and please send me more.

    I'm doing general research surveying the testing techniques used in open source. For that I need a questionnaire. I need to send this questionnaire to 200 developers and analyse their feedback. For example, one question might be "When do you consider testing to be complete?". My advisor told me that developers will feel more comfortable answering multiple-choice questions, so I hope I have made my need clear. Can you please help me? I'll be glad if you could send me some questions and the e-mail addresses of some developers who are using Linux to develop their applications. I need some 15 of them.

    Awaiting your response, Smita.

    The Editor wrote:

    1) Is open source a permanent change in the software industry, or just a passing fad?

    2) What do the controversies regarding the differences between the open source licenses (GPL, BSD, Artistic, Mozilla, etc) imply for the future of open source? Are they hampering the movement?

    3) Some people say that the proliferation of software patents is going to destroy the open source movement. Is this true?

    4) Is it possible to earn a living by writing open source software?

    Go to www.opensource.org, www.cosource.com and www.sourcexchange.com and look through their web sites. That may give you some ideas.

    Smita responded:

    Thanks a lot for the help. It gave me some ideas of where to search for my kind of thing. But I need some information about TESTING METHODOLOGIES: how people test open source, and what kinds of testing techniques they use.

    I'll be really glad if you can send me some more questions like this. Smita

    The Editor asked:

    To suggest techniques, we'd need to know what the goal is. Why would people be testing open-source software? What would they be looking for? Bugs? How well the program functions compared to a similar closed-source program?

    Open-source programs are usually tested by their developers and some of their users --- in other words, by the people who need the programs to function correctly. There is frequently some kind of informal organization of volunteers which accepts bug reports and ensures they are followed up on.

    Organizations such as Linux distributions, which don't write much software themselves but instead repackage other people's software, will also do their own testing to ensure a program conforms to the standard the distribution has set for all its packages. For instance, the Debian distribution follows the Linux Filesystem Standard, which specifies that configuration and data files belong in certain directories. The distribution maintainers may modify the program slightly to make it conform to this rule, then test it to ensure it does. The distribution receives bug reports both about its own errors (which it fixes itself) and errors in the program's internals (which it forwards to the program's own development team).

    Is this the kind of testing you're talking about? If so, your best bet would be to talk with developers of open-source programs. They can tell you how their particular programs are tested, which should give you an idea how open-source programs in general are tested.

    Smita responded:

    I'm doing general research surveying the testing techniques used in open source. For that I need a questionnaire. I need to send this questionnaire to 200 developers and then analyze their feedback. For example, one question might be "When do you consider testing to be complete?". My advisor told me that developers will feel more comfortable answering multiple-choice questions, so I hope I have made my need clear. Can you please help me?

    I'll be glad if you could send me some questions and the e-mail addresses of some developers who are using Linux to develop their applications. I need some 15 of them.


     Thu, 28 Oct 1999 16:00:33 +0530
    From: Neelu Gora <ng@aitpl.stpn.soft.net>
    Subject: LINUX-Display Driver help

    Hello,

    I have Linux 5 (SuSE) installed on my PC at home. When I try to start X Windows, it gives an error message about a missing XFree86 display driver. I have been trying to find a suitable display driver on the net, but could not find one.

    The display chip type is SiS 6215C rev 21. Could you please tell me where I can get a driver?

    Thanks, Neelu.
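
    [For what it's worth, the SiS 6215 is normally handled by the XF86_SVGA server in XFree86 3.3.x rather than by a separately downloaded driver, so upgrading XFree86 and re-running the configuration tool (XF86Setup or xf86config) with the SVGA server selected may be all that is needed. A minimal sketch of the relevant /etc/XF86Config section, letting the server probe the chipset itself:

     Section "Device"
         Identifier "SiS 6215"
     EndSection

    -Ed.]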


     Thu, 28 Oct 1999 14:54:48 +0100
    From: Network Desktop User <G.F.Wood@shu.ac.uk>
    Subject: Linneighbourhood

    Hi, sorry to bother you with inconsequential mail, but I think you of all people should know this!! I'm looking for some software called Linneighbourhood. It's a network neighbourhood browser for Linux. I have scoured the net for it, but to no avail!! Can you help??

    Thanks

    G Wood - UK

    [I haven't heard of it. UNIX traditionally has not had "Network Neighborhood"-type software. The user is expected to know by other means (e.g. a list) which servers are available and what their domain names are. There may be third-party products that do this, but I'm not familiar with them. -Ed.]


     Thu, 28 Oct 1999 17:15:32 +0200 (CEST)
    From: jonathan sainthuile <sainthuile@yahoo.fr>
    Subject: Linux information

    Hello,

    Let me introduce myself: my name is Jonathan, I am a 17-year-old high-school student, and I am crazy about computers.

    From something I read on the Internet, I understood that your site ("linuxgazette") offers the possibility of receiving news about the LINUX operating system by e-mail.

    I am myself a future "Linuxian" and, I must admit, still a neophyte where Linux is concerned. I would like, if possible, to receive information about the language, the differences from Windows, the problems to avoid...

    Thank you in advance for your reply, and keep up the good work.

    Sainthuile Jonathan


     Thu, 28 Oct 1999 17:15:32 +0200 (CEST)
    From: thandor <thandor@cin.net>
    Subject: Terminal Emulators

    I access a Linux shell from my Win98 machine via a terminal login. I am presently using telnet to do this; however, this causes profound graphical errors, no color, and other problems. I am looking for a better terminal emulator. Any suggestions?

    Thanks
    Thandor


     Thu, 28 Oct 1999 17:15:32 +0200 (CEST)
    From: <Vikrantj@niit.com>
    Subject: linux clickability in windows NT Domain

    I have a machine running Red Hat Linux 5.2 in a Windows NT 4 domain. The machine is completely on the network and is visible in the Network Neighbourhood of the other Windows 95 and NT computers, but when I click on the Linux machine it says "network path not found". If I search for the machine by its IP address and click on the result, the machine icon opens up, allowing me to browse the shares. After making the necessary changes to the smb.conf file, when I gave the

     smbpasswd -j DOM -r DOMPDC
     
    command (DOM stands for my domain and DOMPDC for my PDC's NetBIOS name), it gave an error and did not allow me to join the domain.

    What could be the reason, if anyone can help?
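
    [For "smbpasswd -j" to succeed with Samba 2.0.x, smb.conf generally needs the domain-security settings, and a machine account for the Linux box must first be created on the PDC (e.g. in Server Manager). A minimal sketch, using the DOM and DOMPDC names from your message:

     [global]
        workgroup = DOM
        security = domain
        password server = DOMPDC
        encrypt passwords = yes

    As for "network path not found" when browsing by name while the IP address works: that usually points to NetBIOS name resolution rather than the domain join; pointing the clients and the Samba box at the same WINS server often cures it. -Ed.]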


    General Mail


     Fri, 24 Sep 1999 22:38:11 -0600 (MDT)
    From: Phil Hughes <phil@ssc.com>
    Subject: Microsoft demonstrates Caldera (humorous)

    I talked to Jay Green at the Seattle Times about the Microsoft Linux web page. He felt, as I did, that Microsoft just doesn't know how to "control" Linux because they can't just buy it.

    He pointed me to a web page on the Microsoft trial. He was at the particular event I am including below. He said it was the best commercial he had heard for Linux in general and specifically Caldera Linux. The guy presenting works for Microsoft.

    (From microsoft.com/presspass/trial/transcripts/jan99/01-25-pm.htm)

    "HELLO. MY NAME IS VINOD VALLOPPIL, AND I'M A PROGRAM MANAGER IN THE PERSONAL AND BUSINESS SYSTEMS GROUP AT MICROSOFT. THIS IS A DEMONSTRATION OF THE CALDERA OPENLINUX OPERATING SYSTEM, A NON-MICROSOFT OPERATING SYSTEM FOR PERSONAL COMPUTERS. THE DEMONSTRATION WILL SHOW THAT CALDERA'S OPERATING SYSTEM PROVIDES EFFECTIVE FUNCTIONALITY FOR TYPICAL END USERS.

    I HAVE INSTALLED A COPY OF CALDERA'S OPERATING SYSTEM ON THIS STANDARD PERSONAL COMPUTER AND ACCEPTED ALL DEFAULT SETTINGS, AS WELL AS INSTALLED A SET OF END-USER APPLICATIONS BUNDLED WITH CALDERA'S OPERATING SYSTEM.

    I'M CURRENTLY DEMONSTRATING CALDERA'S OPERATING SYSTEM'S GRAPHICAL USER INTERFACE. THE GRAPHICAL USER INTERFACE, OR GUI FOR SHORT, WAS PROVIDED BY CALDERA TO ENSURE THAT THE OPERATING SYSTEM IS EASY TO USE AND IS COMPETITIVE WITH MICROSOFT'S WINDOWS OFFERING.

    A QUICK TOUR OF THE SCREEN DEMONSTRATES THAT IN MANY RESPECTS, CALDERA'S OPERATING SYSTEM LOOKS JUST LIKE MICROSOFT WINDOWS. CALDERA'S OPERATING SYSTEM HAS A START MENU AT THE BOTTOM OF THE SCREEN LISTING INSTALLED PROGRAMS AND MAKING IT VERY EASY TO SELECT AND RUN THESE PROGRAMS; A TASK BAR AT THE TOP OF THE SCREEN LISTING PROGRAMS THAT ARE CURRENTLY RUNNING ON THE COMPUTER; AND FINALLY, AN ARRAY OF ICONS ON THE UPPER LEFT PORTION OF THE SCREEN PROVIDING USERS A QUICK WAY TO RUN PROGRAMS OR ACCESS INFORMATION ON THEIR HARD DISK.

    LIKE MICROSOFT WINDOWS, CALDERA'S OPERATING SYSTEM PROVIDES A SERIES OF ACCESSORY APPLICATIONS FOR CONSUMERS' DAILY ACTIVITIES SUCH AS EDITING DOCUMENTS AND WRITING E-MAIL. FOR EXAMPLE, I WILL EDIT A QUICK DOCUMENT, TYPING `THIS IS A TEST.'

    I WILL ALSO CREATE A QUICK SAMPLE E-MAIL MESSAGE.

    CALDERA'S OPERATING SYSTEM HAS A GROWING LIST OF THIRD-PARTY APPLICATION SUPPORT AND CORPORATE BACKING, INCLUDING, BUT NOT LIMITED TO, COMPANIES SUCH AS NETSCAPE, INTEL, ORACLE, SUN, AND IBM.

    IN ORDER TO CREATE A MORE COMPETITIVE OFFERING TO WINDOWS, CALDERA'S OPENLINUX OPERATING SYSTEM, IN PARTICULAR, BUNDLES A NUMBER OF THESE THIRD-PARTY PROGRAMS.

    A CRITICAL CLASS OF APPLICATIONS WHICH ARE VERY POPULAR WITH CUSTOMERS IS THE OFFICE PRODUCTIVITY SUITE. MICROSOFT'S OFFERING IN THIS CATEGORY IS MICROSOFT OFFICE. OTHER COMPETITORS TO MICROSOFT OFFICE WHO BUILD ON THE WINDOWS PLATFORM INCLUDE COREL, IBM, AND STAR DIVISION OF GERMANY.

    CALDERA BUNDLES STAR DIVISION'S PRODUCTIVITY SUITE WITH ITS OPERATING SYSTEM. IN THIS CASE, I HAVE STAR OFFICE FOR CALDERA'S OPERATING SYSTEM ON SCREEN. LIKE MICROSOFT'S POPULAR OFFICE SUITE, STAR OFFICE PROVIDES AN INTEGRATED SUITE OF APPLICATIONS INCLUDING WORD PROCESSING LIKE MICROSOFT WORD, A SPREADSHEET PROGRAM LIKE MICROSOFT EXCEL, AND A PRESENTATION GRAPHICS PROGRAM LIKE MICROSOFT'S POWERPOINT.

    STAR OFFICE'S APPLICATIONS IN THIS CATEGORY NOT ONLY PROVIDE FULL-FEATURED PRODUCTS, BUT THEY'RE ALSO INTEROPERABLE WITH POPULAR WINDOWS PRODUCTS. FOR EXAMPLE, I WILL NOW IMPORT A RICHLY FORMATTED DOCUMENT CREATED IN MICROSOFT WORD INTO STAR OFFICE RUNNING ON CALDERA'S OPERATING SYSTEM. NOTICE THAT NOT ONLY THE TEXT OF THE DOCUMENT WAS ABLE TO BE IMPORTED INTO STAR OFFICE, BUT ALSO FEATURES SUCH AS RICHLY FORMATTED SECTION HEADINGS--IN THIS CASE, THE BLUE BOLD-FACED TEXT WITH THE LINE ABOVE IT--AND EMBEDDED GRAPHICS, THE CIRCLE WITH THE WORD "PRINTER" INSIDE OF IT.

    LIKE CALDERA'S GRAPHICAL INTERFACE, STAR OFFICE ALSO BENEFITS STRONGLY FROM CUSTOMERS' EXPERIENCE WITH MICROSOFT PRODUCTS. THE STAR OFFICE PROGRAMS HAVE BEEN DESIGNED TO LOOK LIKE, AND WORK LIKE, MICROSOFT OFFICE. AS JUST ONE EXAMPLE, DOCUMENT FORMATTING FEATURES SUCH AS BOLD-FACE TYPE, UNDERLINE TYPE, AND ITALICS ARE QUICKLY AND EASILY AVAILABLE TO THE END USER WITH JUST A SINGLE MOUSE CLICK ON BUTTONS THAT LOOK VERY MUCH LIKE, AND ARE LOCATED ON A TOOLBAR JUST LIKE, THE BUTTONS THAT PROVIDE THESE FEATURES IN MICROSOFT OFFICE.

    IN SUMMARY, I HAVE DEMONSTRATED THAT CALDERA'S OPERATING SYSTEM IS, FIRST, POWERFUL AND EASY TO USE; SECOND, THAT THERE IS SIGNIFICANT THIRD-PARTY SUPPORT FROM BOTH SOFTWARE AND HARDWARE COMPANIES; AND FINALLY, THAT CALDERA'S PRODUCT BUNDLES A STRONG OFFICE PRODUCTIVITY SUITE FROM STAR DIVISION WHICH IS NOT ONLY INTEROPERABLE WITH MICROSOFT PRODUCTS, BUT IS ALSO DESIGNED TO WORK AND LOOK LIKE MICROSOFT PRODUCTS SO THAT USERS OF THOSE PRODUCTS WILL BE COMFORTABLE AND PRODUCTIVE USING THESE PROGRAMS.

    THIS CONCLUDES THE DEMONSTRATION OF CALDERA OPENLINUX OPERATING SYSTEM VERSION 1.3."

    --
    just fyl


     Fri, 24 Sep 1999 22:38:11 -0600 (MDT)
    From: s. keeling <keeling@spots.ab.ca>
    Subject: http://www.linuxgazette.com/index.html

    Yes, I have a comment.

    YOU IDIOT! Sorry, don't take it personally; it's just a figure of speech.

    I'm at the url listed in the subject. I'm supposedly at the cover page of the latest issue. What do I do now? Where the @#$%^&* is the "Next Page" button. How do I turn the page on your so-called Gazette?

    Oh, maybe you have to hit the "Back" button to get to the page where you can select the next page ... Well that's stupid!!?!?!?

    Sorry, it had to be said. Rant off. Sorry if I offend. I'll go look at content now, thanks.

    [There is a single Front Page that is shared by all the issues. That is why it doesn't have an issue-specific "next page" button. It is a bit confusing, but it has been that way since long before I started editing the Gazette, and we haven't received any other complaints about it. -Ed.]


     Sun, 26 Sep 1999 12:14:59 +0000
    From: Benjamin Smith <bens@saber.net>
    Subject: Letter format extension?

    OK, you're over-worked, underpaid, and would probably find yourself split a thousand ways from Sunday if you actually tried to implement 1/100th of the suggestions you get.

    That said, I have another one for you.

    The letters column would be, IMHO, easier to read if there were talk-backs after each letter, or ones that could easily be linked to, so that existing suggestions for solving a problem can be perused before I add my own.

    More like a newsgroup, or the talk-backs at the end of /. articles.

    -Ben

                        ("`-''-/").___..--''"`-._    (Simba)
                        `@_ @  )    `-.  (        ).`-.__.`)
                        (_Y_.)'  ._    )  `._ `. ``-..-'
                    _..`--'_..-_/  /--'_.' ,'
                  ((().-''  ((().'  (((.-'        Benjamin Smith
    
    [The problem is, Slashdot is read at only one web site, whereas the Gazette is also read from mirrors, mirrors of mirrors, CDs, etc. Thus, the only common denominator is standard HTML: no CGI scripts, no databases, no JavaScript. The current Mailbag + 2-Cent Tips system seems to be the best way to ensure that everybody can read all the responses.

    An indexing system to link letters and responses would be nice. Heather has already implemented an index for the most frequent Answer Guy questions. If somebody has an idea how to do something like that for the Mailbag letters, we can consider it. However, sorting the letters into subjects and putting in direct links to each letter in the back issues would be a lot of work.

    We are looking at ways to improve the Gazette and make it easier to navigate, but only in ways that won't leave a portion of our readership out.

    In the meantime, to find help with a problem, use the Answer Guy index, the Index of All Issues (linked at the bottom of the issues list on the Front Page), and the search engine linked in the middle of the Front Page.

    P.S. Nice tiger. -Ed.]


     Tue, 28 Sep 1999 11:37:09 -0600
    From: Dale Offret Jr. <doffret@silverstar.com>
    Subject: Spanish Translations

    Dear sir or madam,

    In reading the October issue of the Gazette, I got sidetracked into the mirrors page and the French translation sites. I looked for Spanish translations and didn't find any.

    My question is: is anyone currently developing a Spanish site? If so, who? If not, what criteria need to be met for someone to offer a translated issue?

    Background:

    I am a college graduate with an associate's degree in Information Systems. I have avidly read the Gazette for the last two to three years. I am not a native Hispanic, but my grandfather was born in Mexico. From Dec. 1992 to Dec. 1994 I served as a church representative in Costa Rica in Central America, where I learned a great deal of Spanish. I also took three years of Spanish in high school.

    I am looking for ways to improve my Spanish and I believe this could help me.

    Thank you for your time.
    Sincerely,
    Dale Offret Jr.

    [There are no Spanish translations I know of.

    There are no criteria requirements. If you do a translation, we will gladly put a link to it on the mirrors page.

    You may wish to work together with one or more of the Linux Users groups in the Spanish-speaking countries. Perhaps you could arrange a deal with them where you would translate the articles and have a native proofread them. There is a directory of users groups at www.linuxjournal.com/glue, "Users Groups (GLUE)". -Ed.]


     Mon, 4 Oct 1999 07:24:21 +0800
    From: u <leeway@kali.com.cn>
    Subject: redhat in pirated China

    Although legitimate software vendors sell Linux in China, the price is much higher than that of the pirated CD-ROM vendors, who are a major source of software for Chinese users. (Yes, the piracy rate is very high.) They play a cat-and-mouse game with the government agencies. Whether it is Win98, NT, VC or freeware, the price is the same: 10 yuan, or 1.2 US$, per CD.

    Because of the underground nature of these vendors, they label the CD-ROMs dishonestly. Here Red Hat is dominant in the Linux category. In fact the makers are so preoccupied with Red Hat that they describe FreeBSD as "in a Red Hat series". Although the newest Red Hat version is 6.0, they have a 6.5. I don't care, because the "6.5" CD is an authentic 6.0. And I understand why they label them that way: they had already sold "6.0" and "5.5" Red Hat CDs, which actually were variations of 5.1 or 5.2. One "5.5" Red Hat CD description proudly claims that it is "completely cracked". I did notice that the 5.2 installer selects some Chinese tools by default, but I was very unhappy because Netscape couldn't run. I realized much later that the file size was incorrect, although it could be installed. Recently "6.51" has come to the shelves. It has two CDs; the second one has StarOffice, Oracle and DB2. I wonder when Red Hat will make an official presence here. But would that help? The official Red Hat CD is 50 US$. It can't beat 1.2 US$.


     Wed, 06 Oct 1999 01:14:22 +0200
    From: David Fauthoux <david.fauthoux@free.fr>
    Subject: Merci

    A little mail to tell you that your work is EXCELLENT! My article has a very, very good translation! (I thank Jason Kroll.) Your gazette is clear, interesting and very well managed.

    In French: Bravo!

    I hope I will be able to send you another article to join your great work!

    Thanks again,
    David

    [David was the author of the Bomb ô Bomb article in issue 46. -Ed.]


     Wed, 13 Oct 1999 17:27:57 -0400
    From: Dan Dippold <dann@mich.com>
    Subject: comments, criticisms, suggestions and ideas.

    Search... ?

    How Mr. Coldiron is going to replace his desktop OS with Linux, given that he uses Visual Basic and Access on it ("he has some ideas, more on that in a later issue," he says), is what I *would* search for.


     Thu, 21 Oct 1999 21:40:57 -0700
    From: Ron Tarrant <rtarrant@northcom.net>
    Subject: A Suggestion

    Hi there! I read your magazine and I think it's great. I've picked up quite a few tips from the Answer Guy and I really like his column. But...

    I'd really like to see a separate index page for the Answer Guy with links to all his stuff in all the issues organized by subject. It would make it a lot easier to find articles on specific topics. Heck, it might even make an interesting book when enough information has been gathered. If you would consider this, I'd be most grateful. Thanks for a great magazine!

    [Heather Stern's time machine was busy this month (to borrow a phrase from the Python newsgroup), and she has already implemented the Subject Index you seek. See the Answer Guy column in this issue. -Ed.]


     Mon, 25 Oct 1999 15:44:04 +0530
    From: balaviyer <n1040233@bom7.vsnl.net.in>
    Subject: regarding subscription

    Dear sir,

    I want to subscribe to this new group. How do I go about it?

    [This is not a mailing list, so there's nothing to subscribe to. What you see at www.linuxgazette.com is what we do. -Ed.]


     Thu, 30 Sep 1999 12:14:59 -0400
    From: Barry <barry.thoms@ms.rc.gc.ca>
    Subject: Gazette

    I am brand new to the Linux world so this may be a stupid question.

    Why do all of the past-issue links for the Linux Gazette go to Red Hat and not to the issue?

    regards
    Barry

    [The links at www.linuxgazette.com don't do this. If you find a mirror site that's messed up or way out of date, please e-mail gazette@ssc.com with the URL of the site, and I'll try to figure out what's wrong. -Ed.]


     Thu, 23 Sep 1999 17:03:29 +0100
    From: Andrew Bryant <andrew@brilyant.demon.co.uk>
    Subject: Gazette issue 44 AND Netscape

    Hi,

    Could you tell me a reason why Netscape 4.06 suffers indigestion when I ask it to browse my downloaded copy of the HTML version of LG 44?

    The system is a 486 with 32 MB RAM, running Red Hat 5.1 with a 2.0.35 kernel. Netscape reads 34% of the file, then stops. The Netscape process (I should say processes, because there seem to be two) still occupies memory, but no longer consumes clock cycles according to top. Nothing on the page responds to mouse or keyboard, and the page doesn't redraw if you drag another window across it.

    There is still plenty of swap space available - is it possible that Netscape doesn't "know" how to use it?

    Issues 43 & 45 behave themselves. A rational explanation would be welcome, and a cure even more so! I have studied the offerings of the Netscape site, but nothing I read there seems quite to fit these symptoms.


     Sat, 02 Oct 1999 23:58:57 -0600
    From: Doug Dahl <dougdahl@incentre.net>
    Subject: Magazine cut-off

    Dear sir, I use Red Hat 5.1 with Netscape 4.05 (or 4.02? whatever the default is), and I have the same problem mentioned in your recent edition: the full-page HTML version loads only about halfway on your Linux Gazette home page. In a test of this issue, I only got to about the middle of the second paragraph of the "Linux and the Future" article by Husain Al-Mohssen, at a file size of roughly 299535 bytes. Lately I have taken to reading the gzipped text files, especially as I can read them offline, but I thought I would mention this problem since apparently someone else has it too. As to size, I was routinely able to read transcripts of the MS-DOJ depositions up to about 400 KB with no problems (except those apparently on their end and not related to size at all).

    Sincerely,

    Doug Dahl

    [I don't know why some people's files are getting cut off halfway. Any idea? -Ed.]


     Thu, 7 Oct 1999 16:55:51 -0700
    From: John Cockerham <jcocker@silverlink.net>
    Subject: Linux Is Not For You

    Bravo to Mr. Nod for his article Linux Is Not For You in issue 46. I too am going through the growing pains of using LINUX for the first time. My cousin, a big time UNIX systems engineer, wants me to do a little project for him. He has an accounting program written in DBaseIII that he wants ported to PostgreSQL. This sounds feasible, and I should have the requisite knowledge since I earn a living as a SQL Server and Oracle DBA and an NT systems administrator. A database is a database and SQL is mostly SQL. How hard could it be?

    I added a third computer to my home NT network and attempted to install LINUX. The installation may have been successful; I will never know, because when faced with the dot prompt, I didn't know what to do. None of the DOS commands or even the old RTE commands I could think of would work, so I turned it off. My cousin took the machine to his house and got it running on his network. He even put a little 'red hat' sticker on it. He tried to explain why it was necessary to set up a partition for the kernel, a partition for the swap file, a working partition and so on, and then handed me a book about the size of a Manhattan phone directory that was supposed to explain everything. I stuck the computer on my home network and fired it up. It promptly froze up. Since I couldn't get it to do anything, I turned it off. "Worst thing you could do," he told me when I called the next day after it wouldn't boot up.

    I installed another release of Red Hat 6 I had ordered from an online auction. It installed immediately and to my great joy even had a desktop-like interface complete with a start button. This looks great, but I still can't make it do much. At least the mouse works. The next weekend he visited my house, and got the computer to see the network. Even he was not willing to try and get the printer to work over the network, and instead brought a nice HP print sharing device. I asked about installing PostgreSQL and he assured me it was easy. "Just mount the CD and install the RPMs" he tells me. RIGHT!

    LINUX People, I am really trying hard, but I agree with nod. LINUX is NOT user friendly. Now I know all of you true believers are thinking "What a Wimp, you should have seen how hard it was back at release 2". I do know that I could have taken a brand new computer virgin and had them somewhat productive in an NT-SQL Server environment in the time I have spent just trying to learn to copy a file and mount a drive. I still have not been able to start with my conversion project since I don't have a database I can talk to yet. I realize that if I had been brought up in a UNIX environment, this stuff would be second nature by now, but I wasn't. I still haven't had the nerve to shut the damn computer down again, because I hate to have my cousin make the hour drive over to get it started again.

    I am going to keep working at it, but right now I think the motto "You get what you pay for" is true.

    [Thanks for your letter. It was exceptionally well written.

    Those of us who are techies need to keep in contact with people who are new to Linux, in order to get it to the point of being user-friendly. But we also tend to believe that the more you put into learning how your OS works (any OS), the more you will get out of it.

    #BEGIN "rant" {
    Microsoft and some other OS companies unfortunately do not encourage people to do this. In fact, they actively discourage it: at the marketing level ("You don't need to know anything about this computer; just plug it in and it will work" [which is of course a big lie for any OS]; "You don't need special training to become an NT administrator, unlike UNIX"), at the technical level (it's hard to tell what Windows is doing behind the scenes when it boots, or why it crashed, or what to do if those configuration dialogs don't have the options you need), and also at the legal level ("Reverse-engineer this and we'll sue you"). This may be fine for an embedded appliance, but it seriously limits one's ability to use one's computer to its potential, or to fix it yourself if it goes bust.

    My dad always used to say, "Why can't a computer be like a car? All cars have a steering wheel that works the same, ditto for the clutch and gearshift." The trouble is, we ask a car to do only one thing: go somewhere. Running the myriad of applications we expect from a computer is a whole different ballgame. Plus, the computer industry isn't nearly mature enough to reach the level of standardization that cars have. Macs have a different keyboard and user interface than PCs because somebody thought it would "work better". There is not enough agreement on what the ideal user interface should be.
    } END "rant".

    Now I know all of you true believers are thinking "What a Wimp, you should have seen how hard it was back at release 2".

    There will always be people like that.

    I realize that if I had been brought up in a UNIX environment, this stuff would be second nature by now, but I wasn't.
    #BEGIN "personal opinion" {
    I started using UNIX in 1990, through a shell account at an ISP that I got specifically to learn UNIX on. I'm very, very glad I did. It is the standard, the common denominator linking all other parts of computerdom, not least because it's the only OS available from more than one company.
    } END "personal opinion".


     Wed, 27 Oct 1999 16:31:58 -0700
    From: Arnaud <alnoken@mail.dotcom.fr>
    Subject: Linux is not for you

    Dear nod,

    Ouch! A harsh blow... I have just read the article you wrote in the Linux Gazette (issue 46, 1999/10) about Linux. So, let me introduce myself as well. Technically, I am Linux/Unix-savvy; by business need, a buggy-Windows user. And I very much dislike what you wrote, except for one single thing: it is simply true. You really insulted the Linux community, but just by speaking common sense.

    Linux developers tend to think that a sound product is enough. But it is like food. You can spend time preparing a good, healthy, traditional European or Asian (say French or Japanese) meal, and take your time to savour it. Or you can go to one of those unhealthy, too-fat, pre-digested, tasteless fast-food places (say McDonald's or fish and chips). Everybody knows what is good for one's health, but everyone runs into the place where you can fill up your belly instantly. The same is true of using computers. Microsoft sells shit, but more people can use it nearly right out of the box. Linux guys make things that work, while Microsoft sells stuff that is somewhat usable. You are right when you say that Linux guys should study the other camp's strengths and weaknesses.

    Nevertheless, you are wrong when you say that Windows is slower than Linux. This may be true in your example, but Linux does not spend so much time managing an intuitive interface. Stability always carries a performance penalty. Making Windows much stabler would make it much slower, because it would mean the OS spending time monitoring itself, preventing its own crashes by auto-repairing at run time, as the big companies' giant mainframe computers (or phone companies' switches) do. (Alcatel switches' OS can spend up to 65% of its time preventing a crash, because a crash could cut off thousands of active calls; and guess why MVS is as big as NT 4.0 with just a tiny subset of its functionality and no graphical interface...)

    Yet, globally, you expressed what I have been thinking for some years now: Unix and Linux are bound to disappear if they just concentrate on their strengths and do not, at the same time, try to beat Microsoft at its own skills. It can be done. NeXTStep eight years ago was superior to what Windows 95 was seven years later. But NeXT was a small company in a market already overwhelmed by Bill Gates' financial power; a commercial newcomer could not breathe in such a rarefied market. Free software can. If free software does not really try to compete, then some Japanese commercial corporations are the only ones that can fight Microsoft. Sega, Nintendo and Sony have begun development that will turn their game consoles into devices that will actually be used only marginally for playing. Those devices will be built around the Net, so they will manage documents. Then what else could all their computing power be used for, apart from creating and editing documents, and managing your mail, calendar and contacts? And those corporations do know how to make consumer products. Even Windows is not such a product, because you yourself stated that people needed your help to sort out the messes that sometimes occur.

    First there were the huge, stable mainframes, then lighter minicomputers, then microcomputers: all devices rooted in the professional world. Microsoft pushes the envelope against those with Windows 2000 (formerly NT), a product that is exploding in market share. But next comes everybody's information device. This is the consumer market, the one that counts. Microsoft is already here with Windows CE, but is preparing the true attack with its yet-to-come XStation (a Microsoft machine!). The competitors are rare: three Japanese companies... and no one else.

    Where is Linux? Linux claims it seeks world domination. World domination means dominating both markets: the end-user device and the rest of the infrastructure. As history tells, dominating the end device helps in dominating the rest of the infrastructure, just because it means so many more people mastering the technology (even if it is comparatively limited in terms of functionality), and then so many more skilled technicians on the job market. The only exceptions are the surviving old dinosaur IBM, and certainly Microsoft in the future. Unix mini-systems greatly reduced the number of mainframes by credibly infiltrating their world through Amdahl mainframes, relegating them to niche markets (namely big corporations' strategic central data centers). Windows has come the same way from all those desktops, and is now making its way into the infrastructure markets (servers, superservers, routers, datamarts, etc.), restricting Unix to niche markets in its turn (scientific computing and high-end database servers) and FreeBSD and Linux to their proof-of-concept market (mainly Internet servers and firewalls in corporations big enough to justify hiring Linux-knowledgeable people). So now you see why Microsoft has invested less money in Windows during the last three or four years than it has invested in Windows CE, interactive TV, now game consoles, and more generally the consumer markets. You also see why Sega, Nintendo and Sony are to be watched. Their products are to this date poor in terms of information features, but they are extremely powerful in processing power, and designed from the very first line to be bug-free.

    If Linux wants to count in the future, the Linux community must take this into account : not look at what already exists or is advertised by Bill Gates, but anticipate what is next.

    Remember the car industry. First there were those who designed their own cars from scratch, starting in France around the end of the XVIIIth century (as the computer OS pioneers did). Then, at the end of the XIXth century, came those who built their cars from standard pieces but had to turn on dozens of switches (the fuel valve, the battery, the generator, the contactor) and then vigorously turn the crank to start the engine (as mainframe/Unix/Linux users do); then those who bought partially assembled car kits, with pieces bought from several providers (as Windows users do). Then there were the ones who bought cars you just had to climb into and turn the key to use, because Ford brought that in the first quarter of this century. The latter category now accounts for, say, 99.9% of car buyers.

    I guess one day you will buy your computer, turn it on, feed it your name, e-mail address and ISP number (or transfer them in a one-keystroke operation from your old machine, along with your diary, address book and documents), and start using it, without having to deal with LILO, sendmail, PPP accounts, and other idiosyncrasies. That day is just around the corner. With or without Linux.

    Is the community at work making Linux ready? Or will we satisfy ourselves with Linux remaining in its increasingly surrounded computer-techie niche?


    This page written and maintained by the Editor of the Linux Gazette, gazette@ssc.com
    Copyright © 1999, Specialized Systems Consultants, Inc.
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    News Bytes

    Contents:


    News submissions should be sent to gazette@ssc.com in TEXT format. Not HTML, DOC, RTF, etc., please. Instead of a press release, please send a two-paragraph summary of why Linux users would be interested in your product or service, along with a link to your web site.

     November 1999 Linux Journal

    The November issue of Linux Journal will be hitting the newsstands in mid-October. This issue focuses on databases, and includes an interview with Linus, the "mild-mannered programmer, defender of free source, and all-around nice guy", as well as pictures of Linux's creator.

    Linux Journal now has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue67/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/ljsubsorder.html.

    For Subscribers Only: Linux Journal archives are now available on-line at http://interactive.linuxjournal.com/

    Flash!

    Can't find that April 1996 Linux Journal? Someone borrowed the September 1998 copy?

    Now you can have it all! All of Linux Journal (issues 1-56; 1994-1998) and all of Linux Gazette (issues 1-45; 1995-1998) on one archival CD.

    Ordering info in the next Gazette.


    Distro News

    This is a new section featuring news about Linux distributions.


     Red Hat announces Red Hat Linux 6.1

    DURHAM, N.C.--October 4, 1999--Red Hat, Inc. today announced the release of a suite of Official Red Hat Linux 6.1 products. The latest release incorporates easy installation, software update information and access, and improved system management capabilities. Users can move quickly through installation with graphic-based directions, choosing from GNOME, KDE, server or custom interface settings, with seamless integration of software RAID configurations to safeguard critical data and application availability. Additionally, the PXE 2.0 technology (part of the Wired for Management Baseline 2.0) enables Red Hat Linux 6.1 installations to be done across the network, with no need for local media.

    Red Hat Linux 6.1 also provides customers with fast access to the latest software technology from Red Hat through the Red Hat Update Agent, an online customer service application for retrieval and management of software updates.


     Linux-Mandrake "Cooker" (development version)

    Linux-Mandrake is proud to announce the availability of its new development version, code-named "Cooker". What sets this distribution apart from previous versions is that Linux-Mandrake has now adopted a new, open style of development that allows real-time updating of the distribution based on the contributions of both its dedicated staff and the user base.

    Linux Mandrake Cooker is aimed at the following audiences:

    Cooker is now available online at: www.linux-mandrake.com/cooker or on CD.


     Storm Linux Beta Released

    Vancouver, Canada -- October 25, 1999 -- Stormix Technologies announces the official beta version of Storm Linux. The final release is scheduled for November 1999.

    "What we learned from the alpha," says Bruce Byfield, Product Manager for Stormix Technologies, "is that users want a Linux install that's easy but not dumbed down.

    "For example, in the alpha, users were forced to install the Linux Loader and could only do so on the master boot record. However, alpha testers told us very clearly that they wanted more flexibility. So, in the beta, we've given it to them. At the same time, new users can simply accept the defaults and quickly get a workable system. We've tried to balance flexibility and ease of use all down the line."

    Another major feature of Storm Linux is the Storm Hardware System (SHS), which automatically detects PCI devices, including video and network cards, SCSI devices, and USB bridges.

    "The information collected during alpha testing," Lindsay says, "has greatly extended our hardware compatibility. As a result, we'll be putting our database on the web, so that users worldwide can request and receive support for their cards. Our goal is to make Storm Linux the most complete Linux hardware solution available."

    Other features of the beta include GUI modules for networking, dialup, and adding users. "These modules," Byfield explains, "are simply the first glimpse of what Stormix is planning. The final release will include other modules that weren't ready for the beta."

    Copies of the beta are being mailed to registered testers. Copies can also be downloaded from the Stormix web site.

    Founded in February 1999, Stormix Technologies is a Linux development company based in Vancouver, Canada. Its flagship product is Storm Linux, an enhancement of the Debian GNU/Linux distribution.


     Mandrake/Panoramix

    Panoramix is a new installation procedure allowing easy installation of Linux-Mandrake. It has just been integrated into Cooker, our experimental distribution. Panoramix is written entirely in Perl and is interface-independent, offering contributors an easy and flexible way to contribute. This beta version features the integration of Diskdrake, a complete hard-drive partitioning tool that offers users a simple graphical way to complete the painful partitioning phase of installing Linux.


     Debian computers available in UK

    Space-Time Systems is currently offering three PC models with Debian 2.1 (Slink) pre-installed. STS actively supports Free Software and the development of GNU/Linux by donating 3% of the retail cost of each system sold, split equally between the Free Software Foundation and Software in the Public Interest, Inc.

    All systems are supplied with an "A Beginner's Guide to Using GNU/Linux" help-sheet, GNU/Linux software on CDs, plus a boot floppy. All GNU/Linux systems are ready for use, with the X server and a Graphical User Interface (GUI) already configured. www.spacetimesystems.dial.pipex.com/.


     Red Hat Expands Board of Directors, Strengthens Development Group

    Durham, N.C.--October 12, 1999--Red Hat, Inc., today announced that Kevin Harvey, General Partner of Benchmark, has joined the Red Hat Board of Directors and Walter McCormack has joined the company as head of Corporate Development.

    Harvey brings more than 15 years of emerging technology company experience and vast knowledge of the computer industry to Red Hat. McCormack brings a strong background in investing, advisory and financing services to Red Hat.


     Other distribution news

    Caldera

    Debian

    Expert Linux

    Peanut Linux

    Red Hat

    SuSE

    TurboLinux


    News in General


     Linus sees a future full of free operating systems

    CNet article in which Our Hero discusses what he thinks Open Source will -- and will not -- do for the computer industry. He discusses the less-successful-than-expected Mozilla project, and disparages Sun Microsystems' use of the term "open".

    Thanks to The Linux Bits #20 for bringing this article to our attention.


     Upcoming conferences & events

    Alternative Linux 1999
    November 1-3, 1999
    Montreal, Quebec, Canada
    www.alternativelinux.com/ (French)
    www.alternativelinux.com/en (English)
    
    USENIX LISA -- The Systems Administration Conference
    November 7-12, 1999
    Seattle, WA
    www.usenix.org/events/lisa99
    
    COMDEX Fall /
    Linux Business Expo
    November 15-19, 1999
    Las Vegas, NV
    www.comdex.com/comdex/owa/event_home?v_event_id=289
    
    The Bazaar: "Where free and open-source software meet the real world".
    Presented by EarthWeb.
    December 14-16, 1999
    New York, NY
    www.thebazaar.org
    
    SANS 1999 Workshop On Securing Linux. The SANS Institute is a
    cooperative education and research organization.
    December 15-16, 1999
    San Francisco, CA
    www.sans.org
    


     Magic software has learned a lesson about penguins

    Magic Software announced that it has made a $10,000 donation to the Wildlife Conservation Society for the preservation of penguins. In addition, the Company stated it will no longer use live penguins to promote its Linux products. Magic created quite a "flap" recently when it used two live penguins at the LinuxWorld Expo in San Jose to introduce its new business-to-business e-commerce solution, Magic eMerchant for Linux, for this rapidly growing operating system whose symbol is a penguin.

    The controversy started last August when Magic brought two live trained penguins, named Jeffrey and Lucinda, to the San Jose Convention Center to open the trade show floor, as well as to introduce each of the Company's hourly demonstrations of its new eMerchant product. For five minutes at the start of every hour, the birds' trainers would allow people around the booth to take pictures of one of the birds (the birds alternated times in the booth), as well as pet the bird. This "ruffled the feathers" of some trade show attendees, who later called PETA (People for the Ethical Treatment of Animals) to voice their concerns.


     Netwinder news

    OTTAWA, ONTARIO - September 13, 1999 - Rebel.com Inc. announced that it has entered into a technology and distribution agreement with KASAN Electronics Corp., a leading manufacturer and distributor of PC peripherals and electronics. The agreement provides KASAN with exclusive rights to market and distribute the NetWinder OfficeServer throughout the Pacific Rim.

    Due to the increased number of Internet users, the Linux thin-server market is expected to grow rapidly in Korea. The Korean government supports the development of Linux OS-based servers, resulting in Linux-based servers being a very affordable alternative for both business (SOHO) and personal usage. It is also estimated that there will be over 100,000 Web hosting operations in Korea by year-end.

    "We are very pleased to be able to handle sales and marketing for the NetWinder in regions like China and Japan," said Jay Park, director of marketing and development for KASAN Electronics. "The thin-server market is one of the most rapidly growing in the network market; we are confident that by adding our expertise and services to the NetWinder we will achieve world-leader status within three years."


    Tri-Century Dynamic Data Objects for Java

    Wideman, Ark. -- Tri-Century Resource Group, Inc. (TCRGI) announced the immediate availability of Dynamic Data Objects for Java (DDO(tm)), a set of Java development tools that allows adjustments to any DDO-based enterprise application with minimal intrusion and testing. A free 30-day DDO demo is available at www.tri-century.com.

    Development and testing took place on a Red Hat 5.2 system using Java 1.1. and Java 1.2 from www.blackdown.org.


     Cobalt Networks Announces RaQ 3i Server Appliance

    Cobalt Networks, a developer of server appliances, has introduced its third-generation server appliance, the RaQ 3i, today at ISPCON. The RaQ 3i expands Cobalt's RaQ product line by providing the ideal server appliance for high-traffic Web sites, e-commerce, and application hosting. Designed with ISPs and small to mid-sized businesses in mind, the RaQ 3i further solidifies Cobalt's reputation for providing server appliances that offer powerful performance, great return on investment and a low total cost of ownership.

    "The new Cobalt RaQ 3i delivers a compelling server appliance platform for Intershop commerce products," said Ed Callan, Vice President of Marketing at Intershop. "Cobalt's customers will now have access to powerful e-commerce solutions based on Intershop's industry-leading sell-side e-commerce solutions and the cost-effective, easy-to-manage, and scalable RaQ 3i. The RaQ 3i with Intershop Hosting and Merchant uniquely delivers e-commerce for ISPs and businesses."

    Cobalt designed the RaQ 3i with an open-source design and extensible architecture, making it easy to integrate, deploy, and support Internet and network-based applications. Cobalt server appliances are pre-configured with the Linux operating system and provide the core web publishing, email, and file transfer services upon which ISPs and developers can build their solutions. The Cobalt RaQ 3i significantly extends the range of available applications for the Cobalt RaQ product line.


     Cobalt Qube/RaQ get Knox Arkeia backup

    Burlingame, Calif. - September 1, 1999 - Knox Software and Cobalt Networks announced today the availability of Arkeia software for the Cobalt RaQ and Qube families of products. Arkeia provides a comprehensive solution for ISPs and corporations to protect data. Its unique transaction engine allows multiple backups and restores to be performed simultaneously with total reliability.

    Arkeia provides incremental and full backups, scheduled or on demand, and preserves directory structure, registry, symbolic links and special attributes. Arkeia utilizes an exclusive multi-flow technology to deliver speeds that are 200 to 300 percent faster than rival software packages. Its Java interface enables the system administrator to manage multiple remote backup servers through the Internet as if they were local backups.

    Pricing for Arkeia 4.2 starts at under $600. A configuration protecting 2 type-1 computers (UNIX, NT Server) and 5 type-2 computers (Linux, Win 95/98), utilizing a single tape drive, costs less than $1,000. Cobalt RaQ and Qube customers can download the Arkeia software and purchase the package online at www.arkeia.com.


     Cobalt partners with Gateway

    SAN DIEGO--Oct. 12, 1999--Gateway Inc., and Cobalt Networks Inc., today announced an agreement under which Cobalt will supply server appliance technologies that enable Gateway to expand its capabilities to provide small-to-medium sized organizations with affordable and turnkey technology solutions designed to leverage the Internet.


     Cobalt Unveils Management Tool

    Mountain View, Calif., October 18, 1999-Cobalt Networks, Inc., a developer of server appliances, today introduced the Cobalt Management Appliance. This system is specifically designed to allow system administrators to monitor and perform management tasks on large installations of Cobalt RaQ server appliances from a single management console.

    By simply using Cobalt's proprietary user interface, system administrators can easily and securely apply software packages to a list of selected RaQs, reboot multiple RaQs remotely, change settings for an entire RaQ server farm, and activate and deactivate FTP, telnet, SNMP, and DNS.


     LinuxCare expands Japanese operation

    Seeking to widen its presence in the already-expanding Japanese Linux market, Linuxcare, Inc. announced Friday that it has entered into a certification, service and support-based strategic partnership with Inter Space Planning Corporation (ISP)...

    www.ecommercetimes.com/news/articles/991004-3.shtml


     National Semiconductor to use Linux in set-top boxes

    Hong Kong - September 27, 1999 - National Semiconductor Corporation has appointed INFOMATEC AG / IGEL Technology Labs to develop Linux-based firmware to port to National Semiconductor's market-leading set-top box and thin-client platforms.


     VA files for IPO

    VA Linux Systems has filed for an initial public offering (IPO) with the Securities and Exchange Commission (SEC). This is the second major Linux-related public stock move, after Red Hat. The article also mentions Andover.net's and LinuxOne's recent IPO filings.

    www.ecommercetimes.com/news/articles/991011-4.shtml


     Ziatech news

    Ziatech Corporation is combining its CompactNET(tm) multiprocessing technology with the recently announced LinuxPCI 1000 Development System, speeding the implementation of Linux-based, multiprocessing CompactPCI systems. The CompactNET version of the LinuxPCI 1000 comes with MontaVista Software's Hard Hat(tm) Linux, an embedded version of Linux. For more information, visit the CompactNET open source web site.

    This web site allows users of Ziatech's CompactNET multiprocessing technology to download the open-source drivers for the Linux operating system. The CompactNET source code is being released as open source to foster the interoperability of CompactPCI multi-computing solutions from different vendors.


     SCO invests in LinuxMall

    The Santa Cruz Operation has become the largest external investor in LinuxMall.com, one of the 200 busiest sites on the Internet. LinuxMall CEO Mark Bolzern is quick to add that the company will continue its vendor- neutral tradition. The investment will enable LinuxMall to "take LinuxMall.com to the next level and meet the needs of the growing Linux community."...

    www.ecommercetimes.com/news/articles/991014-7.shtml


     Loki Hack Winners Announced

    Atlanta, GA -- October 15, 1999 -- Winners of the first annual Loki Hack were announced in an afternoon press conference at the Atlanta Linux Showcase. During the Hack, enthusiastic and talented hackers from across the country and around the world had 48 hours in a secure setting to make alterations to the Linux source code for Activision's popular strategy game Civilization: Call to Power. The hackers had free rein to add features, alter logic, and implement additional library support.

    "This is the closest we could get to Open Source with our commercial products," said Scott Draeker, Loki president and founder. "The world can't see the source, but the contestants did. And all the hacks, mods, and changes will be posted in binary form for free download from our website next week. This was our chance to show the gaming world what the Open Source community can accomplish, and the results have been incredible."

    At the press conference Draeker awarded first prize to Christopher Yeoh, a developer from Denver, Colorado. Yeoh completed several modifications to Civilization: Call to Power, including the addition of extra units such as land carriers and stealth carriers. Yeoh also enhanced the Spy unit by allowing it to infiltrate an enemy city. If successful, the Spy is destroyed, but the player can view the infiltrated city's statistics until payment is received from the enemy.

    First prize is a StartX MP Workstation from VA Linux Systems. Runners-up will receive their choice of Gamer-X sound cards from Creative Labs, Inc., 3950U2 Ultra2 Dual Channel SCSI cards from Adaptec, Inc., and Millennium G400 video cards from Matrox Graphics, Inc. All contestants completed at least one hack and will each receive a prize.


     Linux Getting 'Pervasive'

    Pervasive Software, Inc. has moved its SQL 2000 server for developing e-commerce applications into the open-source arena by making it available to developers working with the Linux environment...

    www.ecommercetimes.com/news/articles/991018-4.shtml


     MacMillan Publishing + SecurityPortal.com = more Linux security

    MacMillan Publishing USA has entered into a strategic alliance with SecurityPortal.com to bring online security technologies to users of the Linux operating system (OS)...

    www.ecommercetimes.com/news/articles/991022-7.shtml


     Intel Advances Linux Support

    Intel Corp. enables online professional users to bring Gigabit Ethernet performance to their Linux-based Internet operations, and is working with the open-source community to foster Internet-enabling product development...

    http://www.ecommercetimes.com/news/articles/991025-5.shtml


     VMware Prepackaged

    WINDOWS NT USERS NOW HAVE QUICK AND EASY WAY TO ACCESS LINUX

    Palo Alto, Calif. -- Windows NT users interested in using Linux now have a quick, easy and painless way to do so. VMware, the leading provider of virtual machine applications for PCs, announced today that it has partnered with leading Linux operating system vendors Caldera, SuSE and TurboLinux to make available their versions of the Linux operating system to customers of VMware.

    VMware is a revolutionary new application that enables personal computer users to run one or more protected sessions concurrently using one or more operating systems on a single machine. This gives users the flexibility to run alternate operating systems and eliminates the fear of system or network crashes, security breaches or virus attacks while doing so.

    Under these initial agreements with Caldera, SuSE and TurboLinux, VMware for Windows NT and Windows 2000 will come with pre-installed evaluation copies of these companies' versions of Linux. VMware is currently in discussions with other Linux suppliers to expand Windows NT users' options even further.


     Linux Links

    www.LinuxFool.com is a support and discussion portal for Linux users. It is an official mirror of the Linux Documentation Project.

    Linux in Algeria (French-language site)

    eExams offers skills testing via the web for companies seeking to screen prospective employees. A Linux System Administrator exam is included among its many IT and non-IT offerings.

    CNet article about Transmeta taking aim at Intel. From The Linux Bits #20.

    The Iozone filesystem benchmark has a new version.


    Software Announcements


     CUPS for Linux

    The first production release of the Common UNIX Printing System ("CUPS") is now available for download. The license is GPL.

    http://www.cups.org

    The Common UNIX Printing System provides a portable printing layer for UNIX operating systems. It has been developed by Easy Software Products to promote a standard printing solution for all UNIX vendors and users. CUPS provides the System V and Berkeley command-line interfaces.
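
    [In practice, that means both printing styles should work as expected; for example (the printer name "deskjet" here is just a placeholder, not something from the announcement):

        lp -d deskjet file.ps      # System V style
        lpr -P deskjet file.ps     # Berkeley style

    -Ed.]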

    CUPS uses the Internet Printing Protocol (IETF-IPP) as the basis for managing print jobs and queues. The Line Printer Daemon (LPD, RFC1179), Server Message Block (SMB), and AppSocket protocols are also supported with reduced functionality.

    CUPS adds network printer browsing and PostScript Printer Description ("PPD")-based printing options to support real world applications under UNIX.

    CUPS also includes a customized version of GNU GhostScript (currently based on GNU GhostScript 4.03) and an image file RIP that can be used to support non-PostScript printers.

    Sample drivers are provided for HP DeskJet and LaserJet printers. Drivers for over 1600 printers are available in our ESP Print Pro software.


     Cygnus Announcements

    SUNNYVALE, Calif., October 12, 1999 -- Cygnus Solutions, the leader in open source software, today announced the commercial availability of Cygwin, a UNIX/Linux shell environment and portability layer enabling delivery of open source projects to Windows. Cygwin provides corporate IT and software developers a solution for integrating a heterogeneous environment of Windows and UNIX-based systems. In addition, developers can use Cygwin to quickly migrate applications from UNIX to Windows.

    SUNNYVALE, Calif., October 12, 1999 -- Cygnus Solutions and Integrated Computer Solutions (ICS) today announced a strategic agreement to integrate ICS Builder Xcessory PRO (BX PRO) with the Cygnus Code Fusion(tm) Integrated Development Environment (IDE). This agreement provides Linux software developers with the first commercial IDE with graphical user interface (GUI) builder development capabilities.


     Running Windows NT applications on Linux

    San Jose, CA (October 18, 1999) - In a move to dramatically accelerate the expansion of business-critical applications available on the Linux platform, Mainsoft Corporation, the leader in cross-platform solutions for the enterprise, today announced it is developing a version of MainWin for the Linux environment. MainWin is Mainsoft's Windows platform for UNIX operating systems. MainWin allows software developers to re-host Windows NT applications on UNIX, leveraging a single source code base for both Windows and UNIX systems.

    The same MainWin technology that has been available for UNIX platforms will be incorporated into the Linux product; to date, more than one million MainWin licenses have been installed worldwide. As an extension of Mainsoft's product offering, the MainWin for Linux strategy will initially focus on the Red Hat Linux operating system with others likely to follow.

    In the coming weeks, a demo will be available for download on Mainsoft's Web site at www.mainsoft.com, and the commercial release is scheduled for the end of Q1 2000.


     Netscape gets e-commerce security boost

    Article about LinuxPPC's 128-bit encryption for Netscape 4.7 on the Power PC, Netscape's own efforts to boost encryption security, and the Clinton administration's proposal to partly relax US crypto-export restrictions.

    www.ecommercetimes.com/news/articles/991021-2.shtml


    This page written and maintained by the Editor of the Linux Gazette, gazette@ssc.com
    Copyright © 1999, Specialized Systems Consultants, Inc.
    Published in Issue 47 of Linux Gazette, November 1999

    "The Linux Gazette...making Linux just a little more fun!"


    (?) The Answer Guy (!)


    By James T. Dennis, linux-questions-only@ssc.com
    LinuxCare, http://www.linuxcare.com/


    Issue #47 of The Answer Guy

    previous titles sorted by topic!

    (!)Greetings From Jim Dennis

    Contents:

    (?)Logging In
    (?)X Window Networking --or--
    The X Graphical Environment
    (?)Routing, Firewalls, and other "raw" Networking
    (?)Winmodems
    (?)Ordinary Modems and other Useful Serial Devices
    (?)Hard Disk Drives, Filesystems and Partitioning
    (?)CD-ROM, Tapes, and more Removable Media
    (?)LILO, SYSLINUX, and more Boot Loaders
    (?)Mail Servers and Clients
    (?)Other Servers
    (?)Scripting and Programming (including Startup Scripts)
    (?)Sweet Music?
    (?)Non-Linux OS Questions if they didn't fit elsewhere.
    (?)Etiquette and More Social Questions
    (?) If that could possibly have missed it... -or-
    Everything Else

    (!) Greetings from Jim Dennis

    It is amazing to me how busy I have been this month. I answered almost as many messages as in the storm of them that wrapped up last year's backlog.

    I especially want to thank everybody who chimed in this month to help the Answer Guy with the fact that IDE CD-ROMs under SCSI emulation are really transparent - so transparent that such CDs have to be referred to as /dev/scd0.

    I'll really give thanks when I can get back home with Heather and our computers. I have my laptop, but it's just not the same. Happy Thanksgiving, everyone!

    [ For those of you who were answered by email during this month - look forward to seeing your messages in print next issue. This list by topic has been requested by several; it's something I've been wanting to do for a while, and I really think you'll find it useful. Also, each question is still listed only once, so those which might fit more than one section have been listed in the section that best applies. -- Heather]


    (!) Logging In


    (!) The X Graphical Environment


    (!) Routing, Firewalls, and other "raw" Networking


    (!) Winmodems


    (!) Ordinary Modems and other Useful Serial Devices


    (!) Hard Disk Drives, Filesystems and Partitioning


    (!) CD-ROM, Tapes, and more Removable Media


    (!) LILO, SYSLINUX, and more Boot Loaders


    (!) Mail Servers and Clients


    (!) Other Servers


    (!) Scripting and Programming (including Startup Scripts)


    (!) Sweet Music?


    (!) Non-Linux OS Questions


    (!) Etiquette and More Social Questions


    (!) Everything Else


    "Linux Gazette...making Linux just a little more fun!"


    More 2¢ Tips!


    Send Linux Tips and Tricks to gazette@ssc.com


    New Tips:

    Answers to Mail Bag Questions:


    Home Network Domain Name

    Sat, 2 Oct 1999 01:00:45 -0400
    From: Barry <BarryJJ@IBM.Net>

    Discussions of private networks typically point the user at the IP address ranges - such as 192.168... - reserved for private networks.

    But they often also show those networks named something like "...MyHome.Net". Murphy says that any name you pick will eventually be a real domain to which you want access.

    For a private network, you do *not* have to use a ".net", ".com", ".org" ending. I've been happily using an adaptation of my street address - i.e., something like ".MainSt123" - for some time, yielding nodes such as Hub.MainSt123 = 192.168.0.1 for a (Linux) gateway, and things like FamilyRoom.MainSt123 for other machines scattered around the house.

    I run things such as DNS (early Bind, now Bind8), Apache, Squid, Samba, etc. on the hub machine and have had no configuration problems from *not* using a standard, 3-character ending.

    And I sleep easy knowing that I'm *not* using something that may also be a *real* domain name ... at least not in the foreseeable future :-)
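
    [A minimal sketch of what this looks like in /etc/hosts, using Barry's example names (the short aliases in the last column are optional):

        192.168.0.1    Hub.MainSt123           hub
        192.168.0.2    FamilyRoom.MainSt123    familyroom

    -Ed.]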

    Barry Johnson - BarryJJ@IBM.Net


    Spell check script

    Wed, 20 Oct 1999 21:38:20 -0700
    From: David Anderson <davkat1@home.com>

    Here's a little spell check script; I call it "wspell". You can call "wspell" alone and answer the questions, or place up to two portions of the word on the command line, as in:

    wspell re quir
    reacquired
    require
    required
    requirement
    requirements
    requires
    requiring
    

    The requirement of this script is that you get the first few letters correct.

    wspell (shell script)
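
    [The full script is linked above. For readers who just want the flavor of it, the command-line case can be approximated in one line of shell, assuming a word list at /usr/share/dict/words (some systems use /usr/dict/words instead):

        #!/bin/sh
        # match words that start with $1 and contain $2 somewhere after it
        grep -i "^$1.*$2" /usr/share/dict/words

    -Ed.]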


    Linux on playstation 2

    Thu, 21 Oct 1999 17:38:57 +0930
    From: 50012176 <50012176@snetad.cpg.com.au>

    hello,
    I just wanted to say, did you know that the Playstation 2 is using a Linux interface, while the Dreamcast is using Windows?

    later
    (((LeX)))


    Is that open port a backdoor?

    Sat, 23 Oct 1999 01:38:10 +0200
    From: Pat Bateman <pat99@linustart.com>

    That's what I thought the first time I used the program wget. If you don't know why some port is listening and you are a little bit paranoid and think it's a backdoor, first try this command:

        fuser -vn tcp <port>

    This will display the program that opened that port, its PID and the user who executed it. If you are sure it's a backdoor and want to close it, type this:

        fuser -kn tcp <port>

    This will close the port until the next reboot (unless the backdoor program is run by cron). Check your system to eliminate the backdoor. Here's my 2cents_tip.
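
    [For example, to see what is sitting on the standard web port:

        fuser -vn tcp 80

    -Ed.]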


    Inspecting packets denied by your firewall

    Wed, 29 Sep 1999 11:04:48 -0500
    From: marc <lowkey@innocent.com>

    I have a firewall, and the logs show when a packet is denied. Denied packets from the Internet can be a warning sign. But I became tired of searching through the logs for this info, and the IPs were not resolved. So I wrote some scripts that look through a log file, pull out the DENY lines, resolve the IP addresses and remove any duplicates.

    These scripts are perhaps the height of kludginess, but they work. I like to learn from examples, so maybe this can help others.

    The script to run is show_denied_packets.sh.

    This script filters out any lines dealing with my local LAN, because I am only looking for packets from the Internet. You may want to set LOCAL_LAN to the IP address of your local LAN, if you have one.

    It then calls strip_log.pl.

    This Perl script takes the info from the log and prints out just the IP addresses and ports involved. This info is then piped into the logresolve program.

    logresolve is a C program that came with my Apache, although not compiled. I found it in /var/lib/httpd/support/. To compile it I ran:

    gcc -o logresolve logresolve.c
    
    and then moved the logresolve binary into my bin directory. Its path needs to be set in the show_denied_packets.sh script.

    Finally, I was getting many duplicate entries, so I pipe the info to the Unix sort command to sort it all, and the Unix uniq command to take out all the duplicate entries.
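
    [Glued together, the pipeline amounts to something like this - the log location and script paths here are assumptions; adjust them for your system:

        grep DENY /var/log/messages | grep -v "$LOCAL_LAN" \
            | strip_log.pl | logresolve | sort | uniq

    -Ed.]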

    And voila! You now have a list of all the computers that tried to send you packets that bounced off your firewall. To keep an eye on this, I put an entry in my crontab to have this info mailed to me once a week. The line looks like this:

    # once a week check for denied packets
    0 2 * * mon /home/marc/bin/show_denied_packets.sh
    

    Using different scripts together is a strength of Unix. Still, this is a bit kludgy, and if there is any interest, I could whip all this up into one program.


    A random background selector

    Tue, 14 Sep 1999 16:58:55 -0400 (AST)
    From: Ben Okopnik <ben-fuzzybear@geocities.com>

    Hi -

    First thing, I'd like to thank you for putting out the LG; it's been a mentor/SuperFAQ/"AHA!" generator ever since I first installed Linux, over a year ago. "What a long, strange trip it's been". Thanks to LG (as well as a myriad other Linux sources), I'm now very comfortable (not yet a guru, though) with it, and learning more every day.

    Second - a contribution, if you will. Here's one of the shell scripts that I've written, bkgr; it's been a really nifty gadget for me, selecting random backgrounds for my X-Windows. I hope other folx here will find it as useful.

    Drum roll, please... :)

    There is lots of configurable stuff in there - graphics prog, window manager, etc. - but the comments should make it sorta simple to adapt. *Hint*: the backgrounds for E-term (this is where about half of my pics came from) are rather bright and wonderful...
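
    [The core trick - picking one file at random - can be sketched in a few lines of bash; the picture directory and the xv viewer here are assumptions, not necessarily what bkgr itself uses:

        #!/bin/bash
        # paint a randomly chosen image onto the X root window
        DIR=$HOME/pics
        set -- "$DIR"/*
        shift $((RANDOM % $#))
        xv -root -quit "$1"

    -Ed.]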

    Keep up the good work!


    Tips in the following section are answers to questions printed in the Mail Bag column of previous issues.


    ANSWER: Telnet trouble

    Sat, 25 Sep 1999 01:28:37 -0700
    From: Jim Dennis <jimd@starshine.org>

    Dear Jim

    Your email did help me to solve the problem with the telnet in linux. It works fine now. Thanks a million.....

    I have a small doubt. Let me explain...... My network has a NT server, LINUX server and 20 windows 95 clients. I followed your instructions and added the address of all the clients into the /etc/hosts file on the LINUX machine and voila the telnet worked immediately.

    But the NT server was the one running a DHCP server and dynamically allocating the addresses to the clients. The clients were configured to use DHCP and were not statically given IP addresses. I managed to see the current DHCP allocation for each client and add those addresses into the /etc/hosts file on the LINUX server, but my doubt is: what happens when the DHCP address for a client changes? Then again we'll have to change the address in the /etc/hosts file, right? This seems silly. Is there any way to make the LINUX hosts file automatically pick up the DHCP addresses from the NT server?

    Also another important thing is I am still unable to ping from the NT server to the LINUX server using the name. It works only with the IP address. Is there any way to make the NT DHCP to recognize the LINUX server?

    Well, either you shouldn't use dynamic addressing (DHCP) or you should use dynamic DNS. You could also disable TCP Wrappers (edit your /etc/inetd.conf to change lines like:

    telnet	stream  tcp     nowait  root    /usr/sbin/tcpd	in.telnetd
    
    ... to look more like:
    telnet	stream  tcp     nowait  root    /usr/sbin/in.telnetd in.telnetd
    

    (and comment out all of the services you don't need while you're at it).
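
    [After editing /etc/inetd.conf, remember to tell inetd to re-read its configuration:

        kill -HUP `cat /var/run/inetd.pid`

    -Ed.]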

    Thanks Jim for all your help....you've become my LINUX guru.............

    Perhaps you should consider getting a support contract (or joining a local users group). I may not always respond as quickly nor as thoroughly as you'd like.


    ANSWER: Why should I care?

    Sat, 25 Sep 1999 07:34:24 -0400
    From: Rick Smith <rsmith13@tampabay.rr.com>

    "R.Smith" wrote:

    Sir, since my previous letter about Dalnet providers trying to connect to my Linux box via telnet port 23, I have found out that they are also trying port 1080. I have instigated a policy of dropping all incoming connections via a command run by hosts.deny:
    	/sbin/ipfwadm -I -i deny -S %a
    	
    I hate to do this to my niece, but I don't know of any alternative until these dalnet jerks stop this intrusive practice. Anyway, my niece has moved to other irc providers that don't do this kind of thing.

    Why should I care if Dalnet is trying to connect to ports 23 and 1080? I don't run any services on port 1080, and port 23 is closed via hosts.deny. I care because WITH JUST ONE dalnet user, I sometimes have dozens of syslog messages per day. I have to go through them and decide if there is a problem. I have to run whois, nslookup, traceroute, etc. on them to see if they are bogus. And many of the dalnet domains and IPs ARE bogus.

    I could ignore connect attempts to port 23 and miss that one attempt that really was important. I could ignore port 1080... I could turn off my firewall and let everyone in...

    Imagine what a workload I would have if I were a sysadmin with 20-30 people on dalnet.

    It is simpler to just drop all connect attempts and let my niece use other irc services that aren't abusive.


    ANSWER: Compiling network driver

    Sun, 26 Sep 1999 14:01:58 +0200 (CEST)
    From: Roland <rsmith@xs4all.nl>

    Hi Jeff,

    After you compile the network card driver, you should place it in a directory where insmod searches for it. I think /lib/modules/x.y.z/net would be appropriate, where x.y.z is your current kernel version, e.g. 2.2.10 or 2.0.38.

    Alternatively, you can set the MODPATH environment variable to point to the directory where your module is located. See "man insmod".


    ANSWER: How to prevent remote logins as root

    Sun, 26 Sep 1999 14:17:48 +0200 (CEST)
    From: Roland <rsmith@xs4all.nl>

    Erik,

    I read your question in issue 46 of the Linux Gazette.

    To deny remote logins as root, add the following to the /etc/login.access file:

    -:root:ALL EXCEPT LOCAL
    

    This means you can only login as root from a local console.

    But if I were you I would disable telnet entirely and use ssh (secure shell). You can disable telnet by adding a "#" in front of the "telnet" line in /etc/inetd.conf.

    If you are not running a server, I would disable inetd entirely. To do this, comment out the lines that start inetd in the start-up scripts. For Debian this is /etc/init.d/netbase, for Slackware the /etc/rc?.d scripts ("?" is your runlevel, look at /etc/inittab for the default runlevel). I don't know about Red Hat, but you can do a "grep inetd /etc/init.d/*" to find it there.

    Ian Carr-de Avelon <ian@emit.pl> says:

    From: Erik Fleischer <ferik@iname.com>

    For security reasons, I would like to make it impossible for anyone logging in remotely (via telnet etc.) to log in as root, but so far haven't been able to figure out how to do that. Any suggestions?

    This is an easy one, at least under Slackware; other distributions may differ. The file /etc/securetty has the terminals root can use. It looks something like:

    tty1
    tty2
    tty3
    tty4
    tty5
    tty6
    ttyS0
    ttyS1
    ttyp0
    ttyp1
    

    The tty(number) entries are what you use normally with the PC video card and keyboard. ttyS(number) entries are serial lines, so for example if you connect to your Linux box via a modem. ttyp(number) entries are "pseudo terminals" which you get if you come in via telnet. Delete all the ttyp entries and you can't telnet in as root.

    Yours
    Ian

    [Jeremy Johnstone < wizdem25@hotmail.com> and Stephen Crane <scrane@flexicom.com> also sent in the same suggestion. -Ed.]

    Jonathan Marsden <Jonathan@XC.Org> adds:

    You don't say what sort of login you have in mind: telnet? FTP? SSH? rlogin? I'll try to deal with all of those!

    (1) Set the file /etc/securetty to contain only the local console device(s). This is actually what is done in most or all well known Linux installations by default. It will prevent root login on telnet connections (or dialin lines, or any tty except the ones listed!).

    (2) Make sure root is included in the file /etc/ftpusers. Again, this is done by default on most or all current Linux distributions. This file lists all users who will be denied FTP login (one user per line), even if they use the "correct" password for that user.

    (3) In /etc/ssh/sshd_config (may be /etc/sshd_config on some distributions), set PermitRootLogin no. This prevents users logging in as root using SSH.

    (4) Disable rlogin by commenting it out of /etc/inetd.conf, where it is referred to as the 'login' service -- in other words, put a # sign before the line that starts with the word login, and then do kill -HUP `cat /var/run/inetd.pid` to tell inetd of the change.

    You will also need to keep current with security updates for your distribution, avoid running unnecessary services, and generally be aware of network security issues, if your computer is connected to the Internet; reading the Linux Security HOWTO and the more comprehensive "Linux administrator's Security Guide" at

    is also worthwhile to learn more about keeping your Linux systems secure.


    ANSWER: Re: reply to Linux on a laptop

    Sat, 2 Oct 1999 16:42:12 -0600 (MDT)
    From: Michal Jaegermann <michal@ellpspace.math.ualberta.ca>

    Russ Johnson wrote replying to a plea for help from a new Linux user with ATI rage LT PRO in a new laptop:

    You bet there's a solution. It's not perfect (yet), but it works well until XFree86 gets a new server out there. The solution is to use the Frame Buffer server. Details are here: www.0wned.org/~cain/ragefury.htm. Other than that, the only solution available is to purchase a commercial X server.

    The answer is correct in that this is a solution, but it is not the only one, nor the best. A few months ago I found myself in a similar situation, installing Linux for somebody with a Gericom (a German company) laptop. Looking around on the Internet, I found fairly quickly (don't ask me how, as I do not remember now, but it was fairly easy :-) the following web page:
    www.fachschaften.uni-bielefeld.de/physik/leute/marc/X/

    Among other things, one can find there binaries of an X server supporting the LT PRO, which works very well. The card is similar to other ATI Rage cards, but different enough to require special treatment.

    You may also want to consult ruff.cs.jmu.edu/~beetle/ragefury.htm.

    I do not know if LT PRO support has found its way into the recent XFree86 releases; pretty likely.


    ANSWER: Shell programming

    Sun, 26 Sep 1999 14:23:41 +0200 (CEST)
    From: Roland <rsmith@xs4all.nl>

    For starters, the bash(1) manual (type "man bash" at the command prompt) gives a detailed if somewhat cryptic listing of all the shell language features.

    I'd recommend reading a lot of other peoples' shell scripts. For instance, look at the system startup scripts in /etc/init.d, or (if /etc/init.d doesn't exist) in /etc/rc2.d.


    ANSWER: Internet connection problem

    Sun, 26 Sep 1999 14:35:48 +0200 (CEST)
    From: Roland <rsmith@xs4all.nl>

    Rakesh,

    First you need to know what authentication method your ISP uses. This can be PAP or CHAP or just a plain-text password.

    Then you need to tell kppp to use that authentication method. I'm not familiar with kppp, so look at the documentation. :-)

    If kppp doesn't have options to configure PAP or CHAP, you'll have to create a file called /etc/ppp/pap-secrets or /etc/ppp/chap-secrets yourself.

    These files should contain a line in the following format

    # client       server      secret      IP addresses
    rsmith         *           foobar
    

    First comes your login name, then a *, then your password. Lines beginning with "#" are comments.

    For more information read the pppd man-page (type "man pppd" at the prompt).


    ANSWER: Run-time error on cplusplus programme

    Sun, 26 Sep 1999 14:45:31 +0200 (CEST)
    From: Roland <rsmith@xs4all.nl>

    I think you should ask this question on the cygwin mailing list: cygwin@sourceware.cygnus.com

    There is also an archive of the mailing lists at http://www.delorie.com/archives

    For more information, check the homepage: http://sourceware.cygnus.com/cygwin/


    ANSWER: Making Linux talk to an NT network

    Sun, 26 Sep 1999 15:08:26 +0200 (CEST)
    From: Roland <rsmith@xs4all.nl>

    It sounds to me like you want to use Linux as a client, not as a server, right?

    In that case you should use the smbfs utilities. You'll find them at http://samba.SerNet.DE/linux-lan/


    ANSWER: Preventing unwanted telnet access

    Sun, 26 Sep 1999 09:41:10 -0400 (EDT)
    From: Robert Tennent <rdt@cs.queensu.ca>

    Rick Smith asked for a way to prevent unwanted telnet access. I recommend a package called portsentry which automatically detects port scans and multiple failed telnet attempts. It denies access and doesn't return any IP packets to that host. It's free for non-commercial use. Available from

    http://www.psionic.com/abacus/portsentry/

    Bob T.


    ANSWER: Maximal mount reached; check forced

    Mon, 27 Sep 1999 00:35:03 -0400
    From: Ted <tytso@mit.edu>

    From: Jim Dennis

    We call that "losing the lottery." It always seems to happen when you're in a hurry to get the system back up and running.

    Yup. Note that even once we have journalling support in ext2, you will want to occasionally force an fsck over the filesystem just to make sure there haven't been any errors caused by memory errors, disk errors, cosmic rays, etc.

    If you need your laptop to reboot quickly just before a demo (and your laptop doesn't have a hibernate feature or some such), something you can do is to sync your disks, make sure your system is quiescent (i.e., nothing is running), and then force a power cycle and let your system reboot. Your system will then fsck all of your disks, and you can then shut down your system, confident that the dreaded "maximal mount count" message won't appear during that critical demo.

    If you want to live dangerously, you can change the maximal mount count value on a filesystem using the 'tune2fs' command's -c option. You can also manually set the mount count using the -C (upper case) option. You can see the current values using a command like:
    tune2fs -l /dev/hda1
    

    If you know that your system is fairly reliable --- you've been running it for a while and you're not seeing weird failures due to cheesy cheap memory or overly long IDE or SCSI cables, etc. --- it's actually not so dangerous to set a longer maximal mount count.

    One approach, if your system is constantly getting shut down and restarted, is to set the filesystem so it uses the time the filesystem was last checked as a criterion instead of a maximal count. For example:

    tune2fs -c 100 -i 3m /dev/hda1
    

    This will cause the filesystem to be checked after 100 mounts, or 3 months, whichever comes first.

    (It should be safe to change some values when you have a filesystem mounted read-only; though it might be worth asking an expert, so I've copied Ted Ts'o and Remy Card on this message.)

    Yes, it's safe to change these values if the filesystem is mounted read-only. If you're ***sure*** that the filesystem is quiescent, and nothing is changing on the filesystem, you can even get away with changing it while the filesystem is mounted read-write. It's not something I'd really recommend, but if you know what you're doing, you can get away with it. It really depends on how much you like working without a safety net.

    As far as I know there is no way in which this volume label is currently used. It seems to be a wholly optional feature; I guess we can use these to keep track of our removable media or something.

    You can use the volume label in your /etc/fstab if you like: For example:

    LABEL=temp              /tmp                    ext2    defaults        1 2
    

    or

    UUID=3a30d6b4-08a5-11d3-91c3-e1fc5550af17  /usr ext2    defaults        1 2
    

    The advantage of doing this is that the filesystems are specified in a device-independent way. So, for example, if your SCSI chain gets reordered, the filesystems will get mounted correctly even though the device names may have changed.

    - Ted


    ANSWER: Riva TNT 2

    Mon, 27 Sep 1999 18:01:07 +0200
    From: Peter "Blacky" Van Rompaey <peter.van.rompaey@xylos.com>

    NVidia has released its own drivers for Riva TNT / TNT 2 under XFree86

    Check them out at:

    www.nvidia.com/Products.nsf/htmlmedia/software_drivers.html


    ANSWER: Netscape and Java

    Fri, 24 Sep 1999 19:59:51 -0500
    From: Aaron Douglass Miller <amiller3@nd.edu>

    This fix for Netscape distributed with RH6 appears at http://www.linux-now.com

    I do not take credit for this, it is not my work...

    Edit the file:  /etc/X11/fs/config
      change this:
        catalogue = /usr/X11R6/lib/X11/fonts/misc:unscaled,
                    /usr/X11R6/lib/X11/fonts/75dpi:unscaled,
                    /usr/X11R6/lib/X11/fonts/100dpi:unscaled,
                    /usr/X11R6/lib/X11/fonts/misc,
                    /usr/X11R6/lib/X11/fonts/Type1,
                    /usr/X11R6/lib/X11/fonts/Speedo
    
    to this:
        catalogue = /usr/X11R6/lib/X11/fonts/misc:unscaled,
                    /usr/X11R6/lib/X11/fonts/75dpi:unscaled,
                    /usr/X11R6/lib/X11/fonts/100dpi:unscaled,
                    /usr/X11R6/lib/X11/fonts/misc,
                    /usr/X11R6/lib/X11/fonts/Type1,
                    /usr/X11R6/lib/X11/fonts/Speedo,
                    /usr/X11R6/lib/X11/fonts/75dpi
    
    And then restart the font server with this command:
      /etc/rc.d/init.d/xfs restart
    

    Tue, 28 Sep 1999 21:18:37 -0500
    From: Larry Settle <lsettle@mail.com>

    This is a reply to: mjaganna@us.oracle.com

    He wrote on Mon, 20 Sept, 1999:

    I am running Netscape Comm 4.51 on Red Hat Linux 6.0. It crashes invariably if I load a site with any Java applet etc. Is there something I am missing or is this a known bug?

    Mahesh

    I had the same problem on Red Hat 6.0. I fixed Netscape Comm 4.6, but 4.5.1 was broken in the same way.

    Use this URL to Netscape's knowledge base: help.netscape.com/kb/consumer/990807-8.html

    In case you can't reach it:
    execute: chkfontpath --list
    If "/usr/X11R6/lib/X11/fonts/75dpi" is not listed
    execute: chkfontpath --add /usr/X11R6/lib/X11/fonts/75dpi

    Note that "/usr/X11R6/lib/X11/fonts/75dpi:unscaled" will be listed. You still need the one without the ":unscaled" suffix.

    Larry Settle


    Mon, 11 Oct 1999 23:34:43 -1000
    From: Kevin Brammer <kncbram@hawaii.rr.com>

    Yes, it's a known bug with Redhat 6.0. The fix is simple, type this (as root) in a console window:

    chkfontpath --add /usr/X11R6/lib/X11/fonts/75dpi
    

    For more bugs/fixes/issues with Redhat 6.0, check out: www.redhat.com/cgi-bin/support?faq


    Wed, 27 Oct 1999 07:50:36 +1300 (NZDT)
    From: Tobor <sc.wong@ieee.org>

    It's a well-known bug, and Netscape is one of the worst pieces of software on Linux, IMHO. Do a search on www.searchlinux.com or Dejanews and you'll see how many hate postings there are on the Linux newsgroups.

    Anyway, there's one way to stop Netscape from crashing as often. Do you download Netscape from their FTP server or from your distro? If you download from Netscape, don't use the link on their HTTP pages. They only link there to binaries linked against libc5, which crash very often on my Red Hat 6.1 box. On their FTP server there's another set of binaries linked against glibc 2.0, which is much more stable. Try them out and see which ones are better.

    PS. I always turn java off.


    ANSWER: Installing Linux on large drives

    Wed, 29 Sep 1999 19:02:52 -0400
    From: Noah White <noah@silverbacktech.com>

    To avoid possible BIOS limitations, just make a /boot partition that ends below cylinder 1024.
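
    [For example, a layout along these lines keeps the kernel within the BIOS's reach; the sizes are only illustrative:

        /dev/hda1   /boot   ~16 MB, at the start of the disk
        /dev/hda2   swap
        /dev/hda3   /       the rest of the disk

    -Ed.]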

    -Noah


    ANSWER: Reading Linux partitions from NT/95

    Tue, 05 Oct 1999 08:49:49 +0200
    From: Gwenael Lambrouin <glambrouin@csi.com>

    uranus.it.swin.edu.au/~jn/linux/explore2fs.htm


    ANSWER: COBOL compiler

    Wed, 06 Oct 1999 12:36:17 -0400
    From: Matthew Dean <dean@deskware.com>

    Regarding the posting "Re: Help wanted for a (Cheap) COBOL compiler for Linux", we offer a product called CobolScript for US$49.95. CobolScript(tm) is a COBOL-like interpreted language with specialized syntax for file processing, CGI programming, and internetworking. CobolScript also has a wide range of advanced math and business functions available to facilitate quick and easy calculating.

    See www.cobolscript.com for more information.


    ANSWER: Mounting a zip disk

    Mon, 11 Oct 1999 13:38:46 +1000 (EST)
    From: Richard Wraith <rgw@trinity.unimelb.edu.au>

    Whoops, a small error in the address. This will work!

    This is an email I sent to our local linux users group after a somewhat tricky setup of a zip drive. You might want to add some of the info here to the atapi zip drive entry in 2cent tips and tricks.

    I have an ATAPI zip on the second IDE interface as the slave device - ie /dev/hdd.

    Oh, and thanks for the tips and tricks article - it was a great help for most of the job.

    Date: Mon, 11 Oct 1999 13:25:21 +1000 (EST)
    From: Richard Wraith 
    To: Linux Users of Victoria <luv@luv.asn.au>
    Subject: Re: Mounting a zip disk
    

    Thanks to all those who replied, particularly Derek Clarkson and George Georgakis - the answer was in the fine detail.

    The important points to note that aren't so clear from the HOW-TO:

    1) Compile in IDE floppy support in the kernel - there is no need for SCSI emulation unless you want auto-eject support. Also remember to compile in support for the filesystems you wish to have on your zip disks.

    2) Zip disks actually appear at two device nodes, depending on the history of the disk. If the disk has previously been password protected by Iomega's zip tools, it needs to be mounted at /dev/hdd1 (or whatever the /dev/hd location is for your system). Whereas if the disk was never password protected, it gets mounted at /dev/hdd4. This is where I think I got caught. (Example mount commands follow this list.)

    3) vfat is the filesystem type, but msdos and auto will work fine as long as you get the device right.

    4) ext2 (i.e. Linux) formatted disks mount at /dev/hdd, i.e. without the extra number - whether the disk has been password protected before or not.

    5) Formatting a disk from vfat to ext2 and back to vfat does not clear the previous password protection stuff - interesting, huh!
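
    [Putting points 2 through 4 together, the mount commands come out something like this; the mount point /mnt/zip is an assumption:

        mount -t vfat /dev/hdd4 /mnt/zip   # vfat disk, never password protected
        mount -t vfat /dev/hdd1 /mnt/zip   # vfat disk, once password protected
        mount -t ext2 /dev/hdd  /mnt/zip   # ext2 disk

    -Ed.]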


    ANSWER: CDROM is not a block device

    Wed, 13 Oct 1999 16:34:27 +1000
    From: Edwin Rikken <Edwin.Rikken@si.han.nl>

    Hi Dave,

    You seem to have installed Linux with your CD-ROM in working condition, so it must be OK. Your CD-ROM has worked in Winxyz, I presume. My advice is first to check cabling and jumper settings. Let's assume you have one hard disk and one CD-ROM. /dev/hda will be the device for communicating with your hard disk (I am leaving out the numbers that describe which partition, but you get my drift). Now the question is: where did you put your CD-ROM?

    1) In case of slave on the primary IDE controller: it should be /dev/hdb. If so, did you jumper the CD-ROM accordingly?

    2) In case of master on the secondary IDE controller: it should be /dev/hdc (you think it is); you should check the jumper setting. In the sloppy DOS/Win world it will work fine with good or bad jumper settings. Not so in Linux. You must be sure that you jumpered it as master. If you did, there remains one thing to do (it worked fine for me): at boot time, type hdc=cdrom at the LILO boot: prompt. The kernel will display hdc=cdrom? at boot time, which means it will accept your instruction but does not grok the message. Your CD-ROM should work after that. This is a so-called boot parameter and can be inserted in the LILO configuration file (see the example below).

    3) In case of slave on the secondary IDE controller: check the jumper settings. The CD-ROM should work as /dev/hdd, and you should at boot time instruct the kernel that hdd=cdrom.
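
    [To make a boot parameter like this permanent, add it to /etc/lilo.conf and rerun /sbin/lilo:

        append="hdc=cdrom"

    -Ed.]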

    The reason, I think, is that the kernel considers it logical (in the Vulcan sense :-) ) to put the CD-ROM in the next empty spot: /dev/hdb (slave on the first IDE controller). At boot time it probes the slave on the primary IDE controller, and if it detects zilch, it 'knows' there will be no other devices. So no CD-ROM as master on the second IDE controller will be detected. Therefore, if you instruct the kernel with hdc=cdrom, it will respond with: OK, I will accept what you said, but I think that's stupid because it's not logical... :-)

    Good luck...

    PS. if this does not work you should check your fstab file in /etc (?)...

    Greetings ["groetjes"], pari@si.han.nl (Paul).


    ANSWER: Compiling IRC

    Mon, 18 Oct 1999 23:07:57 +0200
    From: Scott Swafford <320053139930-0001@t-online.de>

    Manuel & everybody,
    I noticed your article in the Gazette about problems compiling IRC, and while I haven't done it in Linux, I compiled IRC and run it on my site (http://www.pfpconsortium.org). I did it under Solaris 2.7 (Sparc HW), so I'm not sure how 'portable' my help will be, but I'm willing to try.
        Could you please send me information on what error messages you were getting when trying to compile, your compiler (gcc, cc, etc.) and platform? I noticed a few tweaks during the configuration process, and a few library files needed during the compile, but other than that it was straightforward (the hard part was getting the executable to run with the right arguments, and setting up the ircd.conf file correctly, if I remember correctly).
        Anyway, send your details and I'll try to take a look.
    Cheers,
    Scott Swafford


    ANSWER: Chat server

    Sun, 24 Oct 1999 15:11:45 -0400
    From: Chris Campbell <campbelc@infi.net>

    There are several well-used channels. On the Undernet IRC network, you can get on via us.undernet.org or eu.undernet.org. On the EFnet network, try irc.emory.edu. On DALnet, try irc.dal.net. Then, when connected, go to the #Linux channel. - Chris


    ANSWER: Imagemap

    Wed, 27 Oct 1999 11:07:38 -0600 (MDT)
    From: Michael J. Hammel <mjhammel@graphics-muse.org>
    Needing to define hotspots on some images in HTML documents, I found a total lack of programs for Linux to accomplish this task. Does somebody know what I'm searching for?

    There are a couple of choices. First, there is the ImageMap plug-in for the Gimp. It will allow you to define hot spot regions and outputs the HTML tags for the image map. registry.gimp.org

    Another option is MapEdit, from Thomas Boutell. It does pretty much the same thing the first option does, but with a different interface. www.boutell.com/mapedit/

    Hope that helps.


    ANSWER: Printing lines of black

    Wed, 27 Oct 1999 11:07:38 -0600 (MDT)
    From: Michael <michael@cimmj.freeserve.co.uk>

    I have a 690C and encountered the same problem (printing from KDevelop using enscript): following the text, a solid black line was printed. After much trial and error I found that using the cdj550 driver solved the problem and still allowed me to print in colour.

    In /usr/local/bin/psjetfilter:
    /usr/bin/gs -q -dSAFER -dNOPAUSE -sDEVICE=cdj550 -sOutputFile=- -


    ANSWER: FAQ and printing...

    Thu, 30 Sep 1999 16:19:30 +1000
    From: Mark Kuchel <m.kuchel@ugrad.unimelb.edu.au>
    Subject: FAQ and printing...

    In the FAQ you say that PDF is only viewable with a custom viewer. Actually, gv and xpdf can both display PDF files. Also, if you do the Netscape "Print to file...." and get a PostScript file, you can then produce PDF files using ps2pdf from the ghostscript package.


    This page written and maintained by the Editor of the Linux Gazette, gazette@ssc.com
    Copyright © 1999, Specialized Systems Consultants, Inc.
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Bob Young Speaks at LXNY

    By Stephen Adler



    Look Ma! That Bengal Tiger is Charging Right at Us with its Claws Ready to Strike!

    New York City under what remains of hurricane Dennis

    [Click on the thumbnail images for full-sized pictures.]

    The multicast packets were going out the tunnel but not in. Hmmm.... I'll kill mrouted, and change the TTL threshold on the tunnel to 1 in /etc/mrouted.conf. Restart mrouted, kill it with a USR1, and check the routing tables it just dumped to /var/tmp. Hmmm.... Again, packets go out, but not in. OK, let's ftp the very latest version of mrouted (beta version number increments by one), recompile, kill the running version of mrouted and start up the new one. Ah, now I have mrinfo with this package to help debug the multicast tunnel I've set up. New version of mrouted but same problem; packets go out, but they don't come in. I've managed to fully configure a one-way multicast tunnel.

    A nice example of the breadth of architectural styles found in the NYC sky scape.
    It was September 7th, 1999, and I was counting down the days to a crazy event which would occur in less than one month: Open Source/Open Science 1999. My life had been turned totally upside down trying to get this conference together, and setting up this multicast tunnel into the Internet was a vital step. The conference would be broadcast onto the Internet come hell or high water! But another very important event was happening that day. Bob Young was taking time away from his schedule to go back to an old haunt of his, LXNY, and spend about 4 hours answering questions and later going to dinner with his old user group. So I spent the day fighting the multicast tunnel dragon, a fiery beast which spewed out multicast packets with deadly force. I defended and attacked with my source code armor and sword, deflecting those deadly packet attacks with emacs, GNU make, gcc and lots of greps and mores. The meeting started at 6:30pm, and this time I was damned if I wasn't going to catch the 4:05pm train into the city. 3:30pm rolled around and the multicast dragon hurled one last fierce round of flaming packets at me. Emacs, man, grep, make and gcc couldn't deflect them and baaam! I got a direct hit. Ouch. With that, I laid down my sword, grabbed my notebook and car keys, and ran to catch that 4:05 to Penn Station. I'd take up the battle with the multicast dragon tomorrow. For now, I had a very important users' meeting to attend. (Important on a historical level...)

    Down below on the street, a newspaper-magazine vendor tends to his inventory.
    I got to the Ronkonkoma station at 4pm on the nose, with good time to walk (not run, as usual) to the train. As I got close to the tracks, I could hear a muffled announcement which sounded like they had just canceled the train into the city. I got closer and my intuition was right. No LIRR to NYC because of a gas leak somewhere between here and Penn. Back to the car, and on down the LIE, where I took a rather circuitous route to avoid possible parking-lot traffic conditions ahead. All in all, the trip into the city was uneventful. The hour-and-a-half drive gave me a chance to reflect on the state of the Open Source movement that day.

    Two standard issue NYC hotdog and hot pretzel stands.
    The big event in the Open Source world was, of course, Red Hat's IPO, which took place about a month earlier. This capitalist rite of passage catapulted some senior members of Red Hat into billionaire status. At least on paper, or more virtually, on the text composed by my web browser. Red Hat's market cap was sitting at over $5 billion that day. One share of Open Source/Freedom software (i.e., NASDAQ:RHAT) started trading that day at about $85, and by the time I headed into the city it was up to over $90. All the closed source marketing hype and FUD coming out of other software OS companies cannot refute the hard cold facts of market forces. The old phrase "free software, you get what you pay for..." rang especially hollow that afternoon. I was totally dumbfounded by the new heights shares in NASDAQ:RHAT were reaching. I remember reading one of those e-trade news blips that NASDAQ:RHAT shares had reached "nose bleed territory." This was when a share in RHAT fetched a cool $50 on the closing day of the IPO. Now that it was trading at over $90 a share, how would those financial reporters describe this territory? "Brain edema territory"? Or maybe "total outer space vacuum territory." Forget the nose bleeds - your whole body explodes from the inside out!

    I did say that NY was under the clouds of Hurricane Dennis, but I thought I would liven up the view a bit with this nice shot of a Midtown Manhattan skyline taken the following weekend after Bob's talk. The shot was taken from Central Park looking south.

    I'm sure millions were spent on this piece of art, yet it seems to get in the way of pedestrian traffic.
    Take a step back and think about this IPO phenomenon. A company with something on the order of 10^2 employees packages and distributes free software, offers services on getting it up and running on your PC, offers only about 10% of its shares to the public, and is now worth over 5 billion dollars! Red Hat's driving force, like many other companies', is its employees. Therefore one can put a market value on each one of those employees at tens of millions of dollars a head! Think about this: the free markets of western civilization value an Open Source employee in the tens of millions of dollars. That's a lot of zeros.

    The unbelievability of the numbers behind Red Hat's IPO dominated my thoughts as I swerved over to the Southern State, down the Southern State, then onto the Cross Island, back onto the LIE, and finally handed over my $3.50 to the toll booth guy to pay for my passage under the East River at Midtown.

    IBM's red sculpture which sits at the base of the IBM building on 57th street.
    I was driving around Midtown at 5:30pm, this being record time for me. So I decided to find a parking lot close to the IBM building where the LXNY meeting was to take place. The first parking lot I found was going to charge me $35 to park. I backed right out of that parking lot. I drove around some more and found another parking lot which had a special: $25 for 3 hours. "How much for 4 hours?" "Forty dolares," replies the attendant in broken English. "40 bucks?!" I backed right out of that parking lot as well! After spending 45 minutes driving around in circles through Midtown Manhattan, I finally gave in. The third parking lot was a mere 32 bucks (or $40 after tax, as I found out when I went back to pick up my car. Sigh...)

    The rain from hurricane Dennis was now coming down over Manhattan rather hard. I got only slightly wet this time, as I dashed over to the IBM building. Knowing that the remains of Dennis were on their way to NYC, I was wise to bring my umbrella along.

    New Yorkers anxiously awaiting Bob's arrival with bated breath? I think not. Most of them have no idea who Bob is and are just taking cover from the rain in front of the main entrance to the IBM building on 57th street.
    There was a small crowd of people waiting at the main entrance to the IBM building for the rain to stop. At first I thought they were all there, waiting with bated breath to glimpse Bob Young for the first time. I imagined this grand long black limousine pulling up to the building, the driver quickly running over to the passenger door to open it for Bob. I never saw such a limo drive up, and I strongly suspect that Bob arrived in a yellow NYC taxi cab. As a matter of fact, I think only about 3 or 4 of the 20 or so people standing out front were waiting to get into the IBM building to attend the LXNY meeting. The rest were taking cover from the rain under the entrance way to the IBM building. I had a chance to meet one of them, who introduced himself as a journalist. After some small talk we both headed in, got our stick-on badges, and headed up to the 6th floor.

    LXNY folk gather at the beginning of the meeting, waiting for Bob to show.

    The meeting room was a large one. There was ample room to fit at least 100 people. When the journalist and I arrived, there were about 20 people there. I got to work handing out fliers to my Open Source/Open Science conference and also took some pictures.

    Brian (I think) and Jim of VA Linux posing for me as Jim zips up his VAIO in one of those Glad zip-lock bags. Good way to waterproof your VAIO.
    I saw some familiar faces from VA Linux. There was one guy with a very nice VAIO notebook running Quake. Jim Gleason of VA Linux was promoting his Linux demo day event: a day where a bunch of guys get together and play Quake all day on a bunch of VA Linux PC's attached to an OC-48 fire hose into the Internet. (Whatever happened to sex, drugs and rock 'n roll?)

    I noticed the door to the meeting room close and went over to try to get it to stay open, figuring people showing up might miss the meeting if the door was closed. As I was futzing around with the door, I turned to find Bob Young looking to get into the room. "Hi Bob," I said, "Welcome to New York." He cracked a smile, returned the greeting and went in to mingle with the rest of the gathering crowd. I followed him in.

    Brian and Ari of VA Linux. If you look closely, you'll notice that they are both soaked from the rain outside.
    By this time, there were closer to 40 people in attendance. Mike Smith, one of the co-organizers of LXNY, was there, writing information on a large paper pad, which sat on an easel, about the various Open Source/Free Software related events going on about town. He was also waiting for his counterpart, Jay Sulzberger, the other LXNY coordinator, to show up and start the meeting. Jay never showed. So Mike called on everyone to listen to a bunch of announcements he had, and the meeting formally got started.

    The Amiga Users Group (AUG) president, I assume, announcing the existence of the AUG.
    Mike started with "LUNY is meeting ...", "The NYLUG is doing ...", "WWWAC is having a ...", "The NYSIA panel discussion will be on ...", and on and on. Finally one guy sitting on the far left of the seating area exclaimed, in a rather grumpy tone, "What are all these user groups for? Why do you have so many? Shouldn't one be enough?" Jim Gleason took that question. He explained that when he was in San Francisco, there were so many user groups that one could find a meeting of some sort any day of the week. When he showed up in NY, the number of groups was small in comparison, so he figured he would start the New York LUG. With that, the man at the end of the seating area said, "Well, we want to announce the Amiga Users Group! (AUG?)" And with that announcement, Manhattan got one more user group, narrowing the "user group count gap" between the east and the west coasts.

    Mike Smith addressing the assembled group before Bob speaks.
    With the announcements finished, Bob got ready to address this particular user group. He haggled with the seated crowd about how he was going to structure his talk. He settled on telling some old LXNY stories and then taking questions from the audience.

    Bob started by talking about the amount of travel he had been doing lately, promoting Red Hat to private industry in the pre-IPO days. Those days were long and the travel extensive. The same presentations were made over and over, to the point that he had a hard time getting his mouth just to form the words. Through this ordeal, he learned the truth of the equation "opportunity - sleep = trouble." He was also amused to realize that the corollary also held true: "trouble + sleep = opportunity." A neat equation of state in the world of sales. Many mistakes were made during this travel, in high-pressure situations and on little sleep, as he and other Red Hat management types pitched Red Hat to heavyweight investors. He recalled one time when they were scheduled to be in New York on such a day, and someone had scheduled a meeting with just one investor in Dallas the day before. He and his colleagues, who were touring the country, did not want to go to Dallas, give their presentation in front of just one investor, fly that night to NY, arriving at 2am, and then try to be fresh for a really big, high-pressure pitch to a bunch of NY Wall Street types the next day. So they decided to play a trick on this poor Dallas investor: during their presentation, they would redo all the mistakes they had made during their other presentations to date. Bob continued, "I got up and gave my introduction, working through all the mistakes I could remember as I gave my talk. I then handed the floor over to Matthew Szulik (the current president of the company). Matthew started: 'We at Red Hat are committed to bringing and supporting the best software Open Source has to offer to the Amiga platform!'" Well, maybe you had to be there to get the irony of the story.

    Bob's waiting for Mike to finish up his announcements before taking the floor.
    Bob went on to talk about the origins of LXNY and his work in the Free Software world. Bob claimed that he was a sales guy through and through. "After the revolution," he said, "I'll be out there selling Fuller brushes." (Whatever they are...) Bob was very clear to cast himself as the "entrepreneur." He went into sales right out of college and has stuck with it since. He started in the computer industry in the leasing market. He would rent computers to companies who didn't want to pay loads of money to add computing power to their IT systems. The computer leasing industry was about $100 million strong back then, and he ended up moving to New York City. He said he ended up in middle management for this leasing company, and to him it was clear that the writing was on the wall. During his stint in NYC, he started a computer newsletter. He was also active in one of the Unix user groups - Unigroup, I believe it was called. He heard complaints that the Unix user groups were shrinking in membership. "We announce the meetings, but fewer and fewer people show up," Bob recounted one member's complaint. The solution for Bob was simple: you had to attract attention to these meetings. The newsletter he was working on needed a unique angle in order to attract the attention of the local computing community. Bob was up against some well-oiled machines like Ziff-Davis and IDG (InfoWorld). These were well-established media groups in the computer industry with huge budgets, staffs of reporters, etc. Bob needed to find a niche which these other magazines didn't cover. This niche was Free Software.

    Bob Young and Mike Smith standing in front of the assembled group sometime during the Q and A part of Bob's talk. I can tell because Bob has taken off his jacket and tie at this point.

    Talking to the computer users at the time, it became clear to him that Free Software had something important to offer. He recalled how people would wax poetic about the wonders of Free Software. It was much more stable and reliable than its commercial counterparts. Thus Bob featured Free Software articles in his newsletter. He didn't tell the audience if his newsletter was a success or not, but Bob had found his niche.

    Another shot of Bob and Mike, but this time before the talk starts. Notice Bob hasn't taken off his jacket and tie yet.
    So Bob started down the Free Software path. This was sometime around 1992 or 1993. Being the entrepreneur he claimed to be, he started to research the Free Software market. He would talk to people about this concept of trying to make money from "free" software, and the consistent answer was that no, you could not. Bob found this strange. Everyone he talked to raved about how good it was, and yet you could not make money from it? This struck an odd chord in his marketing and sales intuition. The whole idea of a free market economy is that you look for a need, and you work at fulfilling it. He was doing this to some extent by publishing a newsletter on free software. The next obvious step was to somehow make this free software available to those who wanted or needed it.

    Bob taking questions and giving answers.
    His research on free software and the ability to turn a profit from it led to a meeting with Richard Stallman. Up to that time, he had joked that this free software stuff was somehow a Trojan horse from Redmond, Washington. Once he met with Richard Stallman, he realized that Stallman and Bill Gates formed the two extremes of a bipolar system in the software world: Bill Gates at one end, in the closed source software world, and Richard Stallman at the opposite end, in the open/free software world.

    At that time, Linux was making its way into the free software world, and Bob saw an opportunity to exercise his entrepreneurial skills. In grand entrepreneurial style, he hooked up with Marc Ewing and started up Red Hat, which he ran out of his wife's sewing room. Some time later, he and Marc moved to North Carolina, where Red Hat is now stationed.

    Bob looking over his shoulder at someone, just after he got to the LXNY meeting.
    Bob was clear about one point in his venture into the free software world. He and Marc (and the rest of the free software industry) were up against Microsoft. And the only way one can take on a giant like MS is by not playing by its rules. Any company that tried to compete with MS using the closed source model was doomed to fail. And many did. Once MS decides it will take over some kind of application, be it a web browser, multimedia player, compiler, or whatever, it will either buy the competition, or release its own version and thus kill the competition. How can you compete with the guy who owns the operating system you're writing software for? This left your software company only a rather bleak choice: be bought or be broken by the OS giant.

    Yet another shot of Bob and Mike. Stop me if I'm getting too repetitive...
    Bob then made some comments along this train of thought regarding Richard Stallman. People who compare Richard and Bob would conclude that these two are on opposite ends of the spectrum. Bob wants to sell free software, or more accurately sell services in the free software market, while Richard's goal is to keep software free. (Remember, "free" as in "freedom", not free as in a "free lunch.") But the common thread, whether it is Bob trying to turn a profit by packaging a free OS and selling services for it, or Richard keeping the code free, is the absolute necessity of keeping the source code free and open. Bob made it very clear: as soon as one starts to dress up a Linux distribution with closed source "enhancements", like partitioner and boot loader applications, a window manager/desktop, or even the installation tools, you are starting to play right into the strengths of Microsoft and the closed source software school. And when you do, you lose! Therefore it is an absolute necessity to keep every bit of code you package and write free and open.

    Bob went on to describe how the railway monopolies of the beginning of the 20th century were broken. They were not broken by other companies building better trains or tracks; they were broken when the interstate highway system was built and truckers could deliver goods from door to door, rather than from region to region. In a similar fashion, the software industry will have to use Free Software to break the monopoly Microsoft now enjoys.

    Jay Sulzberger brings a gift of pastry snacks to the guest speaker.
    At some point during Bob's discussion of his analysis of Free Software, Jay Sulzberger came in. Jay, in typical Jay style, made a rather entertaining entry into the meeting. Jay was wearing a jacket and tie -- the tie not tied quite right, on purpose -- along with some rather ragged shorts. He had with him some baked goods consisting of a cake, some eclairs and Mediterranean sweets. He exclaimed, "You must always bring gifts to the rich!" Unfortunately I can't remember all that Jay said at that moment. Suffice it to say that it was boisterous and in good humor, and we all had a good laugh along with Bob.

    Bob had some more points about Free Software which need mentioning. He talked about how the market for overnight package delivery changed. When Federal Express entered the market, their goal was to reduce the cost of delivering a package overnight from $200 down to $10. What happens when you do so? You change the way people use overnight delivery by expanding its use tremendously. And we see this today, with everyone and his uncle sending or receiving packages overnight. Now with e-commerce, the overnight delivery volume is just going to get bigger. Bob segued into this thought: what will happen in the OS market if you change the cost of an operating system to $0? "You will change the way people use it," implying a great expansion in the use of a Free OS.

    I thought I would break up the monotony of all these pictures of Bob with this picture of a section of a Maya stone relief. The picture was taken the following weekend while I roamed the Metropolitan Museum. It's also a nice contrast to all this talk about high tech and Free Software.
    The final major point of Bob's introductory talk was his thoughts on where the Free OS market was going. Being a businessman, he has to keep in mind the bigger picture of whatever business he's in. For example, when he was in the computer rental market, he knew when his company went from a startup to a major player. That market grosses about $100,000,000 a year; if your company grosses $10,000,000, you can consider yourself a mature company. There is a term for this (which I can't remember now) which means that you have gone from a startup to a major player in the market, and thus your quarterly revenue increases will start to taper off. That is, you grow by a factor of 2 a year, and once you become a "mature" player in the market, you will only grow by a few percent a year since you have in effect saturated the market. Bob has been trying to apply this analysis to Red Hat in the Free Software market. How big does Red Hat have to get in terms of gross income before it can consider itself a "mature" company in the field? The answer to date is that he has no idea. No one knows. From my own personal perspective, one can look at Microsoft's market capitalization. Right now it stands at about $500,000,000,000: that's five hundred billion dollars. And Red Hat stands at a puny $5 billion, 1% of Microsoft's scale. This means Red Hat has another two orders of magnitude to grow before it can be considered a "mature" player in this new market of Free Software. But then, if Red Hat and the other new members of the Free Software market are going to change the way people use software, as in the example of how Federal Express changed the way people use overnight delivery, then you have to factor in several orders of magnitude of expansion of Free Software on top of Microsoft's market cap. So if you consider Microsoft a "mature" company at $500 billion, and you assume, say, one order of magnitude of increase in the use of Free Software because of its $0 cost to install and distribute, then Red Hat may be looking at becoming a "mature" company when it hits a market cap of $5 trillion? (Yow, these numbers are so large it's scary.) But then, we are talking about software which is in every PC (not just Intel), in every network appliance (refrigerators, toasters, fuller brushes...) in every country around the world, tied together by the Internet, so $5 trillion just may be the right scale. Paraphrasing Linus Torvalds, "This is total world domination."

    Back to Bob. Here he is photographed while being only slightly mobbed by LXNY'ers trying to meet the man for the first time.

    At some point the meeting turned from Bob talking about his experiences and analyses of the Free Software market to a question and answer period. There were lots of questions, which varied from asking about his new book, "Under the Radar," to what he thought about making money off other people's software. (He took his coat and tie off to answer that question, which he began by saying, "We are standing on the shoulders of giants...") One thing that I noticed during the question and answer session was the urgency of those who wanted to ask their questions. As time went on, more and more people were raising their hands trying to get a question in. There was an active dialog going on between Bob and the LXNY users group. The question I wanted to get in, but couldn't, was what Bob's advice would be to someone who wanted to enter this new Free Software market. I'll pop the question to him the next time I see him. He did tell me that he was going to attend my Open Source/Open Science extravaganza at BNL, so maybe I'll corner him then...

    LXNY'ers gather at Kaplan's deli after Bob's talk.
    The night was getting on and I could tell Bob was getting tired from all the questions. The LXNY meeting is a rather long one: it goes from 6:30pm to 9:00pm, with dinner at Kaplan's Deli afterwards. It was about 8 or 8:15 and Bob wanted to know how much longer he should take questions. (I think he was hinting that maybe it shouldn't be too much longer.) "Another 1/2 hour would be great!" Mike, the co-organizer of LXNY, told Bob. So Bob continued the question and answer period for at least that long.

    The final question rolled around: something about really bad support from Dell, which went on for about 5 minutes. Bob's reply was the right one: "Send me an e-mail of your complaint and I'll forward it on to the right person." With that everyone got up and the "after the talk buzz" started. Bob was surrounded for the next 20 minutes by people trying to meet him and get another question in. I walked around taking photos. After a while, Jay in a very loud voice told everyone to get out, since the building management closes the room at 9pm.

    Mike, Bob and Jay, sharing a NY Deli moment.
    The next hour was spent at Kaplan's Deli. Bob came right along with the group and sat between Mike and Jay eating some deli delight. I was rather surprised that he would take the time to go with the LXNY bunch over to the deli for dinner. He must have many demands on his time nowadays. It was way too late for me, but I wanted to get some photos of the group in Kaplan's. I had my extra lean corned beef, about 3 glasses of water, said my goodbyes to Jay, and took off for home.

    If you have read my other articles, you know my routine by now. I hit the LIE east, and start counting exits until I reach exit 68. This time was no different. And again, as I drove down the LIE, (I can almost drive this freeway blindfolded) my mind wandered off into Free Software land.

    I considered the event I had just attended to be of historical significance. And unfortunately, comparisons between Bob Young and Bill Gates kept popping into my head. When Microsoft went public, did Bill gather with his old Altair users group to talk about the wonders of DOS? When was the last time Bill showed up to a users group meeting to basically shoot the sh*t with his friends? When will Bob be able to do what he did tonight again? As time goes on and the free software market expands, Bob's time is going to be more and more in demand, and events like this one will just not occur. It's a sad thought but a realistic one, I'm afraid.

    Back to primitive imagery. Why not! I bet there is a law of human social interaction and economic forces which states that these forces are invariant in time. If not, I'll be glad to publish an article on such a law. This picture is of a gold mask excavated in Central America.
    Bob, along with Marc Ewing, has started down a quite adventuresome path. Bob clearly has proven that he understands the world of Free/Open Source Software. He and Marc have taken Red Hat to where other Linux distribution and support companies are headed: build an "ecosystem" of Free Software and an industry will grow from it. The contribution which Red Hat has made to GNOME is, I assume, just the first step. Red Hat has founded RHAD, which will be put to use developing further Open Source projects -- my guess is to expand on the "let's make a fertile ecosystem" model, so that others can start writing application software to run on it and Red Hat can make money supporting systems which use it.

    This gold mask is of Peruvian origin.
    One needs to keep in mind that the "Open Source/Free Software" ecosystem has one ace up its sleeve: this ace being the Internet. I keep harping on this fact, so please forgive my repetitiveness. This Free Software/Open Source phenomenon was born out of the global connectivity of the Internet. The Open Source nature of Free Software is a byproduct of the way software developers work together through the Internet. Another way of writing this is to say that because of the inherent nature of how developers collaborate from far distances, over the Internet, with the goal of sculpting a software package like Apache, GNOME, the Linux kernel, etc., one needs to resort to the "Open Source/Free Software" model in order to make this collaborative system work. It's its own culture, and a pure one at that, meaning that all of the software found in the Open Source/Free Software domain was written in this Internet collaborative model right from the get-go.

    An ugly fellow from the South Pacific.
    So how does this fact affect the closed source software industry? When software is written in the closed source domain, it will be very difficult to transfer it to the Open Source domain which is favored by the Internet. The simple fact that there is monetary interest invested in a closed source software project will keep it from being opened. So this sets a very polarized stage in the software industry. A company which starts out by paying to develop software in the closed source business model, software which it sells and which is its main source of revenue, will have a very large mental barrier to overcome in order to adopt an Open Source business model. One could call them pre-Internet companies. These companies now have a dilemma brought about by the Internet. The very nature of collaborative work on the Internet has given birth to this Open Source development model. The Internet and its connectivity will dominate our future at all levels of our social fabric, from the way we do business to the way we meet our future mates. Because of this Internet connectivity culture which is forming around us, these closed source companies will either be forced into the Open Source model or go bankrupt staying in their closed source domain. One can view this as a Darwinian economic jungle where the principle of "survival of the fittest" (or free'est?) applies. This chain of thought then leads to the other side of the spectrum: the only way for a company to survive in the Internet domain is to start out embracing the Open Source model right from the beginning. The Red Hats, Calderas, TurboLinuxes, LinuxCares and other post-Internet companies are the ones who don't have to face the hurdle of taking a large invested closed source software product and turning it over to the Open Source domain. They all started out on the Open Source side of the Internet software development model, and they will grow right along with the Internet. Because of the power of the Internet -- or better said, the intellectual power that the connectivity of the Internet will harness from a global population -- and the fact that it favors (even gave birth to) the Free Software model, you should ask yourself this question: on which side of the Open Source/Free Software versus closed source fence would you like to be? Let me give you a hint: NASDAQ:RHAT has now reached $135 a share (just shy of a 1000% gain since its IPO) and has been consistently selling for over $100 a share. Ugly as they may be, those Free Market/Darwinian forces are telling us something...

    I leave you with a shot of the Rosetta Stone. An inscription is repeated 3 times in 2 different languages, Egyptian and Greek: the inscriptions are in hieroglyphs, in demotic (another form of Egyptian writing) and in Greek. From what was etched in this stone, 19th-century scholars were able to begin deciphering the Egyptian hieroglyphs. One could argue that the necessity of open standards pre-dates our Internet times by several millennia.


    Copyright © 1999, Stephen Adler
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Soundings: Explorations In Linux Sound

    By Larry Ayers


    Contents:
    Linux Sound Drivers  |  A Basic Sound Toolkit
    Sound Utilities For Musicians  |  Sound Visualization With Extace

    Linux Sound Drivers

    Lately I've been investigating some of the numerous sound processing and display tools available for Linux. This is an extremely active area of Linux software development and covering it fully would be a book-length project; in this series of articles I'll limit myself to software packages which I've found to be particularly useful and impressive.

    Soundcard support for Linux is in something of a fragmented state these days. The drivers supplied with the Linux kernel source (the OSS drivers) are functional and work well with many sound cards; they are being maintained, but the original developers have gone on to form a company, 4Front Technologies, which supplies enhanced drivers (including drivers for cards which Linux doesn't support) to Linux users willing to pay for them. 4Front's drivers can be easier to set up than the native Linux drivers, and 4Front's developers attempt to keep abreast of new cards as they appear.

    Devotees of open-source software prefer open-source drivers, and out of frustration with the lack of progress in free Linux sound-card support, the ALSA (Advanced Linux Sound Architecture) project appeared on the scene. Rather than attempting to extend the current free Linux drivers, ALSA programmers started from scratch. Other developers began to contribute, and the result has been a new modular driver system which has been usable for end-users for the past year or so. Several sound-card manufacturers have provided specifications to the ALSA programmers, enabling them to provide driver modules for previously unsupported cards.

    You aren't restricted to ALSA-aware software if you use the ALSA drivers; OSS-emulation modules are provided so that older and strictly-OSS applications can be run.

    A third sound development effort began as an offshoot of the Enlightenment window manager project. The Enlightenment Sound Daemon is intended to allow multiple digitized streams of audio to be played back by a single device. This is the daemon which provides the "system sounds" for Enlightenment. ESD can also play, record, and monitor sounds on remote machines. This project doesn't provide drivers for specific cards; its purpose is to act as an intermediary between the sound hardware and applications. ESD cooperates well with all three of the above driver families.
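    As a quick illustration of that last point (a sketch only: the sample file name is made up, and 16001 is ESD's default port), the esound client tools can aim a sound at either a local or a remote daemon:

    	esdplay chime.wav
    	esdplay -s otherhost:16001 chime.wav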

    This multiplicity of sound software might at first glance seem to be a confusing morass, but luckily most recent client software has been designed to make use of any or all of the various interfaces, either via compilation switches or command-line options.


    A Basic Sound Toolkit

    Introduction

    There are several command-line software packages which are both useful in their own right as well as providing services to GUI sound software. In some cases a GUI utility is an easier to use front-end for one or more of these console tools, often a welcome convenience when a tool has dozens of possible command options. Rather than supplying URLs for these packages, I refer you to Dave Phillips' comprehensive and up-to-date Sound and MIDI Software For Linux web-site, which offers links to a profusion of sound software for Linux, as well as commentary.

    SoX

    SoX has been around for several years now; originally created by Lance Norskog, it is now actively maintained by Chris Bagwell. SoX is both a file-converter and an effects utility. It can convert just about any sound-file format to any other, as well as optionally processing the sound in many different ways. Effects include various filters as well as several "guitar effects" such as phaser, chorus, flanging, echo, and reverb. SoX also serves as a sound-file player. As Chris Bagwell writes in the distribution README file,

    SoX is really only usable day-to-day if you hide the wacky options with one-line shell scripts.
    One such shell script, called play, is part of the SoX package; it supplies the options to the sox binary which enable it to be a sound-file playing utility. You may have already used it without knowing it was there, as many file managers call sox whenever a sound-file is double-clicked with a mouse.
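    To give a flavor of the raw command line (the file names here are hypothetical, and the echo parameters are of the sort suggested in SoX's own documentation), a format conversion and a simple effect look like this:

    	sox concert.wav concert.au
    	sox dry.wav wet.wav echo 0.8 0.88 60.0 0.4
    	play concert.au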

    mpg123

    Though there are many flashy X Windows mp3 players out there, the humble command-line decoder/player mpg123 is still one of the fastest and most memory-efficient. Several of the GUI players call mpg123 to do the actual grunt work, while XMMS (formerly known as x11amp) now incorporates some of the mpg123 code internally rather than calling it as an external process. Like SoX, mpg123 has many command-line options. With these you can play an MP3 file in a great variety of ways, such as in mono, or at varying speeds. Mpg123 can also retrieve and play files directly from a web-site.
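    A few sample invocations (the file name and URL are placeholders) hint at that range:

    	mpg123 song.mp3
    	mpg123 -m song.mp3                    # mix down to mono
    	mpg123 http://example.com/song.mp3    # fetch and play from the web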

    Gramofile

    Though Gramofile does have an ncurses-based text interface, it's pretty spare, so I'll include it here. Gramofile was originally developed at the Delft University of Technology by Anne Bezemer and Ton Le as a means of capturing audio tracks from vinyl LPs and writing them to WAV files. Subsequently track-splitting and noise reduction were added, though both of these require some tinkering with settings to get good results. Gramofile is particularly useful to people (myself included) who have collections of old LPs and would like to burn tracks to CDs. This can be time-consuming; after the audio stream has been written it takes another block of time to split off individual tracks and run them through the pop-and-click-removal process. Through experimentation I discovered that Gramofile doesn't know or care if another Gramofile session is running on another console or xterm. While one copy of the program is busily sipping at the audio stream and depositing WAV files in its wake, another process can be splitting and filtering the files from the last run.

    I've had good results from simply patching my stereo amplifier's alternate speaker leads to the sound-card input jack. It takes some fiddling to get the amplifier's volume adjusted just right so that clipping and distortion don't occur. I generally keep a software mixer handy while setting up a session. While recording, Gramofile displays a simple level-meter which indicates whether the signal is too strong.

    Gramofile isn't limited to vinyl LPs; I've also transferred tracks from cassette tapes with good results.

    cdrecord

    During the past year or so CD-RW drive prices have plummeted. It's now possible to find even SCSI drives which cost less than two hundred dollars, and IDE drives at not much over one hundred. A couple of months ago my old 12x CDROM drive died and I saw this as a perfect excuse to replace it with a CD read-write drive. My search for Linux software to enable me to use the drive didn't take long -- the consensus on the net seems to be that Joerg Schilling's cdrecord package is robust and well-supported. Though numerous front-end packages have been written as wrappers for cdrecord, so far I've been using it directly. Eventually I'll probably switch over to using XCDroast or one of the others, but as a beginner I find cdrecord's verbose status messages (which are displayed on the terminal as the program burns a CDROM) reassuring. These messages are enabled with the -v option switch.

    Schilling's program is exceedingly versatile. Multi-session CDs (especially useful for data backup discs) are easily enabled, as well as blanking rewritable discs. Just about all recent drives are supported by the program.
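    As a sketch of a typical data-disc session (the SCSI address 0,6,0 is only an example; cdrecord -scanbus will report yours):

    	mkisofs -r -o backup.iso /home/data
    	cdrecord -v speed=4 dev=0,6,0 backup.iso
    	cdrecord -v blank=fast dev=0,6,0      # erase a CD-RW for re-use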


    Sound Utilities For Musicians

    Introduction

    As a semi-proficient amateur guitar and fiddle player, I often find myself wondering just how particular licks and passages of recorded music are played. The players I listen to often play so quickly that distinguishing individual notes and their sequences can be nearly impossible for the unaided ear. Musicians have approached this problem in several ways. Back when vinyl LPs and multi-speed turntables were the norm, some would play 33-1/3 RPM discs at half speed. More recently specialized cassette tape machines have become available which are able to slow down the music without altering the pitch; this would be a boon to the aspiring musician if the machines weren't so expensive. It seemed to me that this was something my Linux machine ought to be able to do, so I began searching for software.

    Creating and Working With WAVE Files

    Whether the audio source is CDROM, tape, or LP, the first step is to create a file on disk which can be manipulated with software. Though historically on unix-ish systems the Sun *.au format was the native sound-file format, these days it's more common for Linux software to be designed to work with the Microsoft WAV format. The two formats are nearly identical; both are mainly made up of PCM audio data, with the WAVE files carrying extra header information. WAV files are huge: CD-quality stereo (two channels of 16-bit samples at 44,100 Hz) occupies about ten megabytes per minute of playing time. There are several utilities which can write WAV files from either an audio stream or directly from an audio CD. cdda2wav, a console program which is bundled with Joerg Schilling's excellent cdrecord package, works well with most CDROM drives. Not only can it rip tracks or entire discs and convert them to WAV files, it can also play the files through a soundcard at any speed without writing the file to disk. Supplied along with cdda2wav is a script (originally by Raul Sobon and modified by Joerg Schilling) called pitchplay which simply calls cdda2wav with options which cause it to not write out a file and to play a CD track at a specified percentage of normal pitch. As an example, pitchplay 6 50 will play track six of a CD one octave lower than normal.

    Another track-ripping package, cdparanoia, is intended for use with CDROM drives which read tracks erratically. Cdparanoia doesn't have as many options as cdda2wav, but with certain drives its error-correction is needed to produce WAV files which accurately reproduce the contents of an audio track.
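    For the curious, typical rips look something like this (the device name, track number and file names are examples):

    	cdda2wav -D /dev/cdrom -t 6 track06.wav   # rip track six
    	cdparanoia -B                             # rip the whole disc, one WAV per track
    	cdparanoia 6 track06.wav                  # the same track, with error correction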

    These command-line utilities don't have to be used in their bare form, as numerous GUI front-ends have been developed. One of the best I've encountered is Mike Oliphant's Grip program, an exceedingly stable and handy GTK-based front-end for not only the track-rippers but also mp3-encoders. Grip is not tied to any particular rippers and encoders. Any can be used; one of Grip's configuration screens allows the user to specify client programs as well as preferred command switches. Grip doubles as a CDDB-aware CD player, which makes it particularly well-suited for the musician. In the screenshot below notice the "Rip partial track" check-box. This allows you to rip just one segment of a track, perhaps a particular solo for study.

    Grip config screen

    What to do with these bulky WAV files now? Andy Lo A Foe has written a sound-file player called Alsaplayer which has several unique features. This player is designed to work with the ALSA drivers, the native Linux OSS drivers, and ESD. It can play WAV, MP3 and MikMod-supported module files as well as CD audio ripped digitally direct from disc. A variety of visualization scopes are implemented as plug-ins, including several FFT variants and a reworked version of Paul Harrison's Synaesthesia program. I was particularly impressed by the variable speed and direction controls, which work amazingly well. In the screenshot below you will see a slider control; it's the central one with the two triangular arrow buttons to the left of it:

    Alsaplayer

    As a sound file plays this slider lets you dynamically alter the speed and even cause the sound to instantly begin playing backwards (handy for finding those hidden secret messages!). I kept expecting the program to crash as I abused this control but it seems steady as a rock. This speed control works equally well with MP3 files. Now if I could just figure out a way to control it with my feet so I wouldn't have to put the instrument down!

    Not every WAV editor can deal with very large files. One program which can, and which can also play them at reduced speed without altering the pitch, is Bill Schottstaedt's snd program. Snd is a self-effacing program which doesn't look like much the first time it is run. Sort of like booting Linux for the first time and seeing a bash command prompt on a black screen. Snd, though, has layer upon layer of complexity which becomes apparent after reading the thorough and well-written HTML manual. Luckily the program's basic editing functions aren't too difficult to learn. Many of the keyboard commands are patterned after those of Emacs and are also available from the menu-bar. The feature which will be of the most interest to musicians is the ability to "expand" a sound-file. This is accessible when Show Controls is selected from the View menu. In the screenshot below the controls consist of the series of horizontal sliding control-bars beneath the main window:

    Snd Window With Controls

    As with Alsaplayer, the speed of playback can be controlled with the second bar, while the two small arrows to the right of the speed bar control the direction of play. But to a musician the third bar (Expand) is the most useful. From the manual:

    'Expand' refers to a kind of granular synthesis used to change the tempo of events in the sound without changing pitch. Successive short slices of the file are overlapped with the difference in size between the input and output hops (between successive slices) giving the change in tempo.

    This expansion works surprisingly well, though such processing does tend to highlight any noise or flaws in the original recording. For a musician just wanting to hear which notes inhabit a complex musical passage this is a wonderful feature. The mp3 player mpg123 can play mp3 files in a similar way. Using (as an example) the command mpg123 -h 2 [filename] will play each frame of the mp3 file twice, resulting in half-speed-same-pitch output. The output tends to be more distorted than that of snd expanding a WAV file, but this likely is a limitation of the lossy mp3 format.

    Snd is chock-full of capabilities which I haven't had time to explore yet. It's scriptable using the Guile scheme dialect. The recording window, featuring a set of simulated VU meters, can be used to record audio from multiple soundcards, microphones or any other source with the output written to a sound file. I find some new feature or capability each time I run the program. All this with a good manual!


    Sound Visualization With eXtace

    Introduction

    I couldn't wrap up this article without at least a mention of an impressive piece of "eye-candy", a program for sound visualization called eXtace. This is an addictive piece of software which was originally written by Carsten Haitzler (of Enlightenment window-manager fame) and Michael Fulbright, of Red Hat. Its new maintainer, Dave J. Andruczyk, has recently given the program a new lease on life, and it's well worth trying out.

    Installation and Usage

    eXtace relies upon ESD (the Enlightened Sound Daemon) for its sound input and won't work without it. ESD is a small download, and it's probably packaged on your distribution CD if you aren't running it already. ESD can be started with the command  esd -port 5001 & ; once it's running, eXtace can be started up. Another requirement is the FFTW libraries (Fastest Fourier Transform In The West). Though this issue will probably be resolved by the time you read this, version 1.2.9 didn't seem to be able to find the libfftw libraries during configuration, and thus the display of the 3D landscape and spike modes is minimal. Version 1.2.8 works well for me, and I recommend it. The source for all versions can be obtained from this site.

    Here is a screenshot of eXtace displaying a moment's worth of Thelonious Monk's Sweet and Lovely, in 3D landscape mode:

    Extace Control

    eXtace main window

    As with many such audio visualization tools, quieter music with few instruments seems to provide a more comprehensible display. A piece such as the slow introduction to Jimi Hendrix's blues Red House (from the Live At Winterland CD) is a good one to try.

    eXtace has several controls enabling you to tailor the display to your machine's capabilities and your own taste. Here is the Options window:

    eXtace Options Window

    The lag factor setting needs to be adjusted for each combination of sound-card and ESD version; it only needs to be tinkered with once, as these settings are saved between sessions.

    You will notice a small black window hovering above eXtace's main window. This one is great fun: manipulate the white line, changing its direction and length with the mouse... just try it. It's sort of like riding a roller coaster.


    Copyright © 1999, Larry Ayers
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Linux on Token Ring

    By Eugene Blanchard


    I decided to implement Token Ring on one of my Linux servers because I had some time on my hands, a few MSAUs and a box of 3Com 3C619B Token Ring network cards. Not to mention a burning desire to run a Token Ring network for the past few years.

    This article will deal with:

  1. installing and configuring a 3C619B Token Ring network card in Linux
  2. simple routing from a Token Ring LAN to an Ethernet LAN through a Linux server.

    Installing the NIC

    The first step was installing the NIC. This required opening the computer and finding a spare 16 bit ISA slot. No problem. In it went and I was one step closer to completion.

    The next step required testing the card. Unfortunately, most diagnostic programs that come with PC hardware run in DOS, so as a rule I always allocate one 20 MB partition to DOS for storing them. I rebooted to DOS and ran the 3C619B configuration program, called 3tokdiag.exe.

    At this point the card should be connected to an MSAU (multistation access unit, sometimes referred to as a MAU) for proper testing. The MSAU can have either the original IBM hermaphroditic connectors, RJ45 or RJ11 connectors. I used an IBM 8228 with hermaphroditic connectors. I connected my RJ45 cable to it using a Token Ring balun (a small impedance-matching transformer) which matches the 150 ohm impedance of STP to the 100 ohm impedance of UTP.

    I ran the diagnostic tests and bang, the MMIO test failed with an error about a memory conflict. So much for right-out-of-the-box luck. This meant that I would have to set the card's IRQ, base address and memory address (which I would normally have to do anyway). A quick check of the Token Ring HOWTO and voilà: it says that cards with the Tropic chipset (the IC has Tropic written right on it) use the ibmtr driver. The card's chipset was indeed the Tropic, and away I went. Now for the configuration parameters... here was where the problems started.

    The 3C619B card could be run in either 3Com mode or 100% IBM compatibility mode. To make a long story short, use the 100% IBM compatibility mode. Even though the settings are not clear, in my case the choices were for "primary or secondary" card which actually means which base address to use. The configuration parameters that Linux is looking for are:

    	Config mode:			IBM
    	I/O Base Address:		Primary	(means using 0xA20)
    	Int Req:			2 (9)		(16 bit cards use IRQ 9)
    	Ring Speed:			16 Mbps
    	Bios/MMIO Base Add:		D4000h	
    	shared RAM Address range:	D0000h	
    	Mem mode:			16 bit
    	I/O mode:			16 bit
    	IRQ Driver type:		Edge triggered
    	Auto Switch:			Enabled
    

    I am not sure what the MMIO address does, but I know that with these values the card passed all diagnostic tests fine. The big problem I had was confusion between the MMIO address and the memory address: I had set the MMIO address to 0xD0000 and this failed miserably.

    The first few tests check the internals of the NIC and the last test checks the lobe connection (between NIC and MSAU). The last test takes quite a long time to perform so be patient.

    NOTE: As far as I can tell, the ibmtr.c source code only allows the above settings (someone correct me if I'm wrong!). Unfortunately, the comment header of ibmtr.c doesn't indicate any configuration settings (an oversight?). From what I can tell from ibmtr.c and from testing performed over a period of 3 weeks (yes, that is right -- I was on the verge of giving up), these are the only values that will work.


    The Kernel and Token Ring

    The Linux kernel must be recompiled for Token Ring support. You can compile it in directly or as a module; both methods work admirably. To configure the kernel, change directories to /usr/src/linux and run one of:

    	make config
    	make menuconfig
    	make xconfig

    I suggest that you use either menuconfig or xconfig. The "make config" method can be extremely unforgiving if you should make a mistake - you have to start all over again.

    The assumption at this point is that you have a working recompiled kernel and are only adding support for a Token Ring card. This means that the only change should be to add Token Ring support to the kernel. Go to the Network Device Support section and select Token Ring Driver Support, either compiled as part of the kernel (Y) or as a module (M). I selected compiled as part of the kernel. Next select "IBM Tropic chipset based adapter support" (again Y or M -- your choice). Save and exit, and you're now ready to recompile the kernel:

    	make clean ; make dep ; make zImage
    	make modules
    	make modules_install
    
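    If you chose to build the driver as a module instead, the usual approach is a line or two in /etc/conf.modules. The following is hypothetical -- the options line in particular is an assumption on my part and should match the card's base address above:

    	# hypothetical lines for a modular ibmtr driver
    	alias tr0 ibmtr
    	options ibmtr io=0xa20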

    I copied the zImage file to the root directory (I'm using Slackware -- you may need to copy it to the /boot directory for other distributions):

    cp /usr/src/linux/arch/i386/boot/zImage /token-ring

    Now that the new kernel was in place, it was time to add a new lilo entry.


    LILO and T.R. Kernel

    Since I wasn't sure how Linux would work with the new Token Ring card, I wanted to be able to boot to the old working kernel (non Token Ring). I added another entry into /etc/lilo.conf that would address the new kernel. At the lilo boot prompt I would have a new choice of which kernel to boot to. I modified /etc/lilo.conf with a simple text editor for the new kernel:

    	# LILO configuration file
    	#
    	# Start LILO global section
    	# location of boot device
    	boot = /dev/hda
    	# how long (1/10 of seconds) will the LILO prompt appear before booting to the first listed kernel
    	delay = 50
    	vga = normal
    	# End LILO global section
    	# Linux bootable partition configuration begins
    	# Original kernel config starts here
    	image = /vmlinuz	# name and path to kernel to boot to
    	  root = /dev/hda2	# which partition does it reside on
    	  label = linux		# the name that the LILO prompt will display
    	  read-only		# let fsck check the drive before doing anything with it - mandatory
    	# End of original kernel
    	# Token Ring kernel starts here
    	image = /token-ring
    	  root = /dev/hda2	# which partition does it reside on
    	  label = token-ring	# the name that the LILO prompt will display
    	  read-only		# let fsck check the drive before doing anything with it - mandatory
    	# End of Token Ring kernel
    	# DOS partition starts here
    	other = /dev/hda1	# which partition does it reside on
    	  label = dos		# the name that the LILO prompt will display
    	  table = /dev/hda
    	# End of DOS partition
    

    My DOS partition is on /dev/hda1 and Linux on /dev/hda2, with a swap partition on /dev/hda3 which is not mentioned in the lilo.conf file.

    After saving and exiting /etc/lilo.conf, you must run lilo to enter the settings. All that is required is to type "lilo" at the command prompt with root privileges. If everything was entered properly, you should see:

    	ashley:~# lilo
    
    	  Added	linux *
    	  Added	token-ring
    	  Added	dos
    
    	ashley:~#
    

    This indicates that everything went okay (ashley is the name of my server). The asterisk indicates that linux is the default boot selection (the first entry in lilo.conf).


    Token Ring Kernel and Boot Messages

    Since I compiled Token Ring support directly into the kernel, I didn't have to modify (usually just uncomment) or add support for the ibmtr driver in the /etc/conf.modules file. When I rebooted the machine, I closely watched for the following messages to scroll across the screen:

    	
    	tr0: ISA 16/4 Adapter| 16/4 Adapter /A (long) found
    	tr0: using IRQ 9, PIO Addr a20, 16 k Shared RAM
    	tr0: Hardware address: 00:20:AF:0E:C7:2E
    	tr0: Maximum MTU 16 Mbps: 4056, 4 Mbps: 4568
    	
    	tr0: Initial interrupt: 16 Mbps, Shared Ram base 000d0000
    	 
    	tr0: New Ring Status: 20
    	tr0: New Ring Status: 20
    	tr0: New Ring Status: 20
    	tr0: New Ring Status: 20
    	
    

    And it's up. It's quite stable, and if you have a passive MSAU, you should be able to hear the relay click in during the ring insertion phase.

    If you see either of these error messages:

    	arrgh! Transmitter busy
    	Unknown command 08h in arb
    

    Then you have the wrong Shared RAM address range configured on your card. Set it to 0xD0000.


    Configuring the Interface

    Now that there was support for the Token Ring card in the kernel, the interface had to be configured. This means that the IP address, netmask, broadcast address and default route must be set. In Slackware, the /etc/rc.d/rc.inet1 file is modified to add these parameters. If you are just testing, you can type the following at the command prompt:

    	/sbin/ifconfig tr0 192.168.2.1 broadcast 192.168.2.255 netmask 255.255.255.0
    

    where:

    	tr0 is the first Token Ring adapter found
    	192.168.2.1 is the IP address of the interface
    	192.168.2.255 is the broadcast address of the interface
    	255.255.255.0 is the subnet mask
    

    At this point, you should type "ifconfig" by itself on the command line interface and you should see something like this:

    eth0      Link encap:Ethernet  HWaddr 00:A0:24:CC:12:6F
          inet addr:192.168.1.3 Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:53775 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7489 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          Interrupt:10 Base address:0xe800
    
    tr0       Link encap:Token Ring  HWaddr 00:20:AF:0E:C7:2E
              inet addr:192.168.2.1 Bcast:192.168.2.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:4500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100 
              Interrupt:9 Base address:0xa20
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              UP LOOPBACK RUNNING MTU:3924  Metric:1
              RX packets:235 errors:0 dropped:0 overruns:0 frame:0
              TX packets:235 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
    

    Notice that the Ethernet, loopback and Token Ring interfaces are all listed. It is very important to make sure that the Ethernet and Token Ring adapters are on separate IP networks. In this example, eth0 is on subnet 192.168.1.0 and tr0 is on subnet 192.168.2.0.

    At this point you should be able to ping your Linux box from the Token Ring network. A symptom of a wrong NIC configuration is that you can ping localhost and the Linux network card's address (like 192.168.2.1) from within the Linux server just fine, but when you ping anything outside the Linux server (such as other LAN hosts) you get the error messages listed above.


    Routing from Token Ring to Ethernet

    There are two methods that can be used to connect Ethernet networks to Token Ring networks. The first method uses the Data Link layer of the OSI model and is called a translation bridge. There are several major differences between the two MAC frames; one of the most significant is the order in which the bits of a byte are transmitted. Token Ring transmits the most significant bit (MSB) of each byte first, while Ethernet transmits in the reverse order, with the least significant bit (LSB) first (or vice versa, depending on whether you are a Token Ring guy or an Ethernet guy). Unfortunately, Linux doesn't support translation bridging, for a very good reason (see the next paragraph).

    The second method uses the Network layer (IP layer) and is called routing. Both Ethernet and Token Ring protocol stacks already deliver their data to the Network layer in the proper order and in a common format - IP datagram. This means that all that needs to be done to connect the two LAN arbitration methods is to add a route to our routing table (too easy!).

    Since our Ethernet routing was already working, including the default gateway, I only had to add the following line to /etc/rc.d/rc.inet1. To test, type at the command line:

    	/sbin/route add -net 192.168.2.0 netmask 255.255.255.0
    

    Any packet not addressed to the Token Ring network 192.168.2.0 is forwarded to the Ethernet network. I used a similar route on the Ethernet side and everything not addressed to the Ethernet network 192.168.1.0 was sent to the Token Ring network.
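    One related point worth checking: the Linux box will only forward packets between tr0 and eth0 if IP forwarding is enabled. On 2.2-series kernels it can be toggled at run time through /proc (2.0 kernels set it at compile time with CONFIG_IP_FORWARD):

    	echo 1 > /proc/sys/net/ipv4/ip_forward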

    To verify that everything still works from the Linux box, ping hosts on both the Ethernet and the Token Ring networks.

    To verify that routing is working, try to ping across the Linux server from an Ethernet host to a Token Ring host and vice versa.

    NOTE: This is a very simple routing example. Only two LANs are being used: 192.168.1.0 and 192.168.2.0. Your situation will most likely be more complicated. Please see the man pages on routed for further information.


    Token Ring Problems

    While Linux ran beautifully with Token Ring, I can't say the same about Win95. The biggest problem that I ran into was the fact that Win95 performs a software reboot whenever its configuration is changed or when most new software is installed. While this isn't a problem with Ethernet, it is a problem with Token Ring. Token Ring has many maintenance and administration duties implemented in the network card itself. The network card requires a hard boot to reset not a soft boot.

    The result was that the Win95 clients would lose their network connections (specifically the network stack to the NIC) and hang during soft boots -- very frustrating. Add any new software, especially if it is a network install, and bam, down goes Win95 -- hung again. I would have to shut off the PC and reboot. I never realized how often you have to reboot Win95 until I implemented Token Ring on it. I would not want to administer a Token Ring network on Win95 for a living.

    This is not a Token Ring fault but a Win95 fault as far as I can tell. I was using Win95a so perhaps later versions have addressed this problem and corrected it. Linux did not have any problems of this nature.


    Copyright © 1999, Eugene Blanchard
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Introduction to Socket Programming

    By Pedro Paulo Ferreira Bueno and Antonio Pires de Castro Junior


    Introduction

    Most operating systems provide precompiled programs that communicate across a network. Common examples in the TCP/IP world are web clients (browsers) and web servers, and the FTP and TELNET clients and servers. Sometimes when we use these Internet utilities we don't think about all the processes involved. To better understand these aspects, we in our research group (GTI, Grupo de Tecnologia em Informática) at the Goiás Catholic University (Universidade Católica de Goiás) decided to write our own network program: a mini-chat built on the basic structure of sockets, the application program interface (API) mechanism that makes all this communication possible over the Net.

    We examine the functions for communication through sockets. A socket is an endpoint used by a process for bi-directional communication with a socket associated with another process. Sockets, introduced in Berkeley Unix, are a basic mechanism for IPC on a computer system, or on different computer systems connected by local or wide area networks (resource 2). Understanding some of the structs involved requires deeper knowledge of the operating system and its networking protocols. This article can serve either as a starting point for beginning programmers or as a reference for experienced ones.

    The Socket Function

    Most network applications can be divided into two pieces: a client and a server.

    Creating a socket

    #include <sys/types.h>
    #include <sys/socket.h>
    

    When you create a socket there are three main parameters that you have to specify:

    int socket(int domain, int type, int protocol);
    

    The domain parameter specifies a communications domain within which communication will take place; in our example the domain was AF_INET, which specifies the ARPA Internet protocols. The type parameter specifies the semantics of communication; in our mini-chat we used the stream socket type (SOCK_STREAM), because it offers a bi-directional, reliable, two-way connection-based byte stream (resource 2). Finally there is the protocol parameter: since we used a stream socket type we need a connection-oriented protocol, and we saw in /etc/protocols that the number of ip is 0 (passing 0 here lets the system choose the default protocol for the socket type, which for a stream socket is TCP). So our function now is:

    s = socket(AF_INET, SOCK_STREAM, 0);
    
    where 's' is the file descriptor returned by the socket function.

    Since our mini-chat is divided into two parts, we will divide the explanation into the server, the client and the code common to both, showing the basic differences between them, as we will see next.

    The Mini-chat Server structure

    Binding a socket to a port and waiting for the connections

    Like all services in a TCP/IP-based network, sockets are always associated with a port: Telnet is associated with port 23, FTP with port 21... In our server we have to do the same thing -- bind some port and prepare to listen for connections on it (that is the basic difference between client and server); see Listing 2. Bind is used to specify for a socket the protocol port number where it will be waiting for messages.

    So there is a question: which port could we bind to our new service? Since the system pre-defines a lot of ports between 1 and 7000 (see /etc/services), we chose port number 15000.

    The syntax of bind is:

    int bind(int s, struct sockaddr *addr, int addrlen)
    

    The struct needed to make the socket work is struct sockaddr_in address; we then have the following lines to give the system the information about the socket.

    The type of socket:
    address.sin_family = AF_INET;          /* use the Internet domain */
    The IP address used:
    address.sin_addr.s_addr = INADDR_ANY;  /* accept on any IP address of this host */
    The port used:
    address.sin_port = htons(15000);       /* use a specific port number */

    And finally bind our port to the socket

    bind(create_socket, (struct sockaddr *)&address, sizeof(address));
    

    Now comes another important phase: preparing the socket to accept messages from clients. The listen function is used on the server side in the case of connection-oriented communication; it also sets the maximum number of pending connections (resource 3).

    listen(create_socket, MAXNUMBER);
    

    where MAXNUMBER in our case is 3. And to finish, we have to tell the server to accept a connection, using the accept() function. Accept is used with connection-based sockets such as streams.

    accept(create_socket,(struct sockaddr *)&address,&addrlen);
    

    As we can see in Listing 2, the parameters are the socket descriptor of the master socket (create_socket), followed by a sockaddr_in structure and the size of the structure (resource 3).
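    To see how these calls fit together, here is a minimal sketch of the server side. This is not the authors' Listing 2 -- just an illustrative echo server, with error handling pared down to exit(1):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int create_socket, new_socket;
        socklen_t addrlen;
        char buffer[512];
        struct sockaddr_in address;

        /* 1. create the socket, as described above */
        if ((create_socket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            exit(1);

        /* 2. fill in the address structure and bind to port 15000 */
        memset(&address, 0, sizeof(address));
        address.sin_family = AF_INET;
        address.sin_addr.s_addr = INADDR_ANY;
        address.sin_port = htons(15000);
        if (bind(create_socket, (struct sockaddr *)&address, sizeof(address)) < 0)
            exit(1);

        /* 3. allow up to 3 pending connections, then wait for one */
        listen(create_socket, 3);
        addrlen = sizeof(address);
        new_socket = accept(create_socket, (struct sockaddr *)&address, &addrlen);

        /* 4. receive one message and echo it back */
        recv(new_socket, buffer, sizeof(buffer), 0);
        send(new_socket, buffer, sizeof(buffer), 0);

        close(new_socket);
        close(create_socket);
        return 0;
    }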

    The Mini-chat Client structure

    Maybe the biggest difference is that the client needs a connect() function. The connect operation is used on the client side to identify and, possibly, start the connection to the server. The connect syntax is

    connect(create_socket,(struct sockaddr *)&address,sizeof(address)) ;
    
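    Again as a sketch only (not the authors' Listing 1), a matching client might take the server's IP address on the command line:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(int argc, char *argv[])
    {
        int create_socket;
        char buffer[512] = "Hello from the mini-chat client";
        struct sockaddr_in address;

        if (argc != 2) {
            fprintf(stderr, "usage: %s server-ip\n", argv[0]);
            exit(1);
        }

        if ((create_socket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            exit(1);

        /* the server's address: port 15000 at the IP given on the command line */
        memset(&address, 0, sizeof(address));
        address.sin_family = AF_INET;
        address.sin_port = htons(15000);
        address.sin_addr.s_addr = inet_addr(argv[1]);

        if (connect(create_socket, (struct sockaddr *)&address, sizeof(address)) < 0)
            exit(1);

        /* send one message and print the server's echo */
        send(create_socket, buffer, sizeof(buffer), 0);
        recv(create_socket, buffer, sizeof(buffer), 0);
        printf("server said: %s\n", buffer);

        close(create_socket);
        return 0;
    }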

    The common structure

    A structure common to the client and the server is struct hostent, as seen in Listings 1 and 2. The send and recv functions are more pieces of common code.

    The Send() function is used to send the buffer to the server

    send(new_socket,buffer,bufsize,0);   
    

    and the recv() function is used to receive the buffer; note that it is used in both the server and the client.

    recv(new_socket,buffer,bufsize,0);
    

    Conclusion

    Since the software of the TCP/IP protocol is inside the operating system, the exact interface between an application and the TCP/IP protocols depends on the details of the operating system (resource 4). In our case we examined the UNIX BSD socket interface, because Linux follows it. The mini-chat developed here is nothing more than a model of a client/server application using sockets in Linux, and should be taken as an introduction to how easy it is to develop applications using sockets. After understanding this, you can easily start to think about IPC (interprocess communication), fork, threads (resource 5) and much more. The basic steps to make it work are:

    1. Run the server
    2. Run the client with the address of the server
    Amazing, don't you think?
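    In shell terms, assuming the compiled binaries are named server and client (the names are hypothetical) and the server's IP address is 192.168.1.3:

    ./server &
    ./client 192.168.1.3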

    This example was the start of our server program in our last project, a network management program. Here are the source listings:

    Resources

    1. Operating Systems , Harvey M. Deitel , 1990
    2. Socket Linux Man Page
    3. Network Functions in C - Tutorial
    4. Internetworking with TCP/IP, Vol. 1, Douglas Comer
    5. Unix Network Programming , Vol2 , Richard Stevens
    6. Unix Network Programming, Vol1, Richard Stevens


    Copyright © 1999, Pedro Paulo Ferreira Bueno and Antonio Pires de Castro Junior
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Multiboot MS-DOS 6.22 - Windows98 - Windows NT Server 4.0 - Linux

    By Tom de Blende


    The original version of this HOWTO is maintained at http://bewoner.dma.be/BeversHP/multiboot.html.
    The most up-to-date version of this HOWTO can be found there. Questions and remarks can be sent to me; I'll try to find an answer, mail it to you, and place it in my FAQ section. You might find me on IRCNET on channels #belgium and #flanders.

    I'm going to explain to all whom it may concern how I created a multiboot environment on my PC. It cost me a lot of effort and time-consuming (re)installations of Windows98 and Windows NT; especially the latter takes a serious amount of time to install. The purpose of this HOWTO is to try to save you this hassle. The only thing I can say is that it worked on my configuration, and that it will probably work on yours too. But as always: you can never be sure. There are already some HOWTOs available on this subject, but most of them are a bit outdated, since they don't cover the problems you might experience using the FAT32 filesystem. And since this is the filesystem Windows98 uses by default, I guess the time is right for this HOWTO.

    The aim was being able to boot directly into MS-DOS 6.22, Windows98, Windows NT Server 4.0 (I've used Windows NT Server 4.0, but you can also use NT Workstation and all versions of NT from 3.51 on) and finally Linux Redhat 6.0 (but other Linux distributions shouldn't cause any problems; I also tried it with SuSE 6.0). If you do not want all these operating systems, that shouldn't be a problem. Since we are using the NT boot loader to load every operating system, the only restriction is that you must install Windows NT. If you don't want NT but only Linux and MS-DOS and/or Windows9x installed, that is possible too. You can boot Windows9x, MS-DOS and Linux without any problems from Lilo (there are other HOWTOs which handle multibooting from Lilo; see below for details). Problems arise when you want to add NT (since Linux and especially NT are very fond of the Master Boot Record). Integrating these two (or more) on one system is the main objective of this HOWTO.

    In advance, you should always outline a solid partition scheme! This may differ a lot from mine. I used two hard disks, but it is possible to put everything on one disk or divide it over three disks or even more. Just read this HOWTO very carefully, and you'll find all the information you need. If this ain't the case, just send me an email.

    Just to make sure, I will give you a description of my PC-configuration:

    For those of you who want to run DOS, Windows98 and NT 4.0, all on a FAT16 filesystem: that's a piece of cake. The purpose of this HOWTO is to explain how to run all these operating systems on one system, each using its own filesystem. This means FAT16 for DOS 6.22, FAT32 for Windows98, NTFS for Windows NT and EXT2 for Linux. Another aim was to create such a multiboot environment without using (expensive) bootmanagers. We will be using the NT OS Loader, which comes along with NT 4.0. It's free (provided you have the NT Server installation disk of course) and able to do the job.

    I used two disks, adding DOS Windows98 and Windows NT to the first and Linux to the second. I also created a 1 gig FAT16 partition on the second disk which can be shared between different operating systems. If you are only using one disk, you can also use this HOWTO, you just have to make some minor modifications. This should be easier for you really. The end result on my PC looks like this:

    Disk one

    Primary -active- partition: MS-DOS 6.22 ~ FAT16 ~ 400 mb
    First logical drive: NT Server 4.0 ~ NTFS ~ 1.7 gig
    Second logical drive: Windows98 ~ FAT32 ~ 4.1 gig

    Disk two

    Primary partition: Backup- and sharespace ~ FAT16 ~ 1 gig
    Rest of this disk: Linux ~ EXT2 ~ 2 gig (more details on Linux partition scheme later on)

    You can see the end result of my partition scheme (using the NT Disk Administrator to make it all look nice) here. The Unknown partitions are EXT2 Linux partitions. NT cannot identify them.
    First of all it's important to know that not every operating system can read every filesystem. Windows NT cannot read FAT32 and Windows98 cannot read NTFS. There are in fact some free drivers available which make it possible for NT to read FAT32 and for Windows98 to read NTFS. But that's the only thing they can do: read. In some cases it is possible to get read and write capabilities, but that will cost you some registration money, and it's still not possible to boot from another filesystem.
    Linux is evolving in such a way that it will be able to access NTFS in the near future, but at this moment things are a little unstable. FAT32 shouldn't be a problem. It is rather ironic that Linux can read FAT32 (created by Microsoft) while Microsoft products like NT can't… The only filesystem that all these operating systems can read and write is FAT16.

    You might ask yourself why -since Windows98 has its own version of DOS: 7.0- I installed DOS 6.22 on the primary active partition. The answer can be found in the previous paragraph. Let's assume you want to install both Windows98 and Windows NT onto one disk, divided into two partitions. An operating system is always booted from the active partition (c:). So both your Windows98 and your Windows NT startup files will end up on the same drive (c:). If you install Windows98 on the first partition, NT won't be able to read its own startup files because it can't handle FAT32. If you install Windows NT on the first partition, Windows98 will fail to boot because it can't read its startup files on the NTFS partition. None of this applies to you if you use FAT16 for all operating systems. But like I said: that's child's play.
    So this is why your primary -active- partition needs to be FAT16. The easiest way is to make that partition large enough so NT and Windows98 can store their temporary installation files on that disk. If this is too complicated for you, it comes down to this: "your primary -active- partition on your first hard disk MUST be FAT16!". Because of this I decided to add DOS 6.22 on the primary partition and make my system DOS-bootable. It's not absolutely necessary to install DOS, but I'd advise you to do it.

    That was my first rule. While we're at it, here is my second rule: "never use one operating system's fdisk or Disk Administrator to create a partition/logical drive for another operating system". What do I mean? If you want to create your DOS partition: use the fdisk you find on the DOS installation disks. If you want to create your NT partition: do this during the NT install. Create your Windows98 partition using the fdisk that's on the Windows98 bootup disk. Our Linux partitions will be created using the fdisk procedure in YaST. I've tried a lot of combinations, and none of them worked well: operating systems refusing to start, partitions that cannot be converted anymore, etc…

    So how did I do it? I installed the operating systems in the following order: DOS, Windows98, NT 4.0, Linux. I'm not saying this is the only way to go. I'm just saying that this works.

    There are some things you need in order to succeed:

    Below you'll find a step by step manual to install all operating systems:

    1. Make sure your first hard disk is properly installed and that it is completely unpartitioned. You can check this by running fdisk (option 4 in the fdisk menu). If you cannot access c:, chances are your hard disk is unpartitioned, but it might very well be unformatted as well, so always check it with fdisk. If there are any partitions, delete them (your primary partition as well). It is not necessary to have your second hard disk inserted yet. It should -however- be inserted by the time you are going to install Windows98!!! The reason for this is quite simple: if the second drive isn't inserted, Windows98 will be installed on the d: drive. When you later insert your second hard disk, its primary partition will automatically take the d: letter, and the Windows98 drive shifts to e:. Physical drives take priority over logical drives and extended partitions. If all the Windows98 files are suddenly situated under another drive letter, Windows98 will refuse to boot. There is just one way to prevent this from happening (besides adding the second hard disk before installing Windows98): creating no primary partition on the second disk, only extended/logical partitions. The choice is up to you.

    2. Insert your bootable install disk of DOS 6.22. If the blue screen appears, don't choose to install DOS yet. Just exit installation by pressing F3 twice. First of all we are going to clean out our Master Boot Record. You can do this by typing:
      a:\> fdisk /mbr
      
      at the DOS command prompt. We are about to create our DOS-partition:
      a:\> fdisk 
      
    3. Now you can create your primary DOS-partition. I made it 400 meg. Since this partition is created with the fdisk of DOS 6.22, this partition automatically is a FAT16 partition. Don't forget to make this partition active! Just leave the other free space on the disk unpartitioned!!!

    4. Reboot your pc for the changes to take effect. Again boot from the DOS install disk. Now you can choose to install DOS. The install wizard will ask you whether it should format the freshly created drive. Do this and just install DOS onto your c: drive. If you're not sure how to install DOS, I suggest you stop reading now and find yourself another hobby.

    5. Time has come to install Windows98. Insert your Windows98 bootdisk. Boot from the disk like you always do. It's not necessary to boot with cdrom-support just yet. Run the fdisk utility:
      a:\> fdisk 
      
      Fdisk will tell you that it has found a large disk, and ask you whether it should use support for large disks or not. I wanted a partition of 4.1 gig for Windows98, so I said yes. You should always choose yes if you want a partition that's bigger than 2.1 gig. Now create an extended partition which covers the rest of your hard disk. Create your logical drive for Windows98 within the extended partition. Leave the rest of the disk unpartitioned!!! Turn your pc off.

    6. AT THIS TIME YOUR SECOND HARD DISK MUST BE CONNECTED TO YOUR SYSTEM. This doesn't apply to you if you only use one hard disk or if your second hard disk hasn't got a primary partition. I've created a 1 gig FAT16 primary partition on that disk. I use it to store all the files that I need in different operating systems, and strongly advise you to create a FAT16 drive as well, but that's up to you and has no influence on the rest of this HOWTO (you can also use your c: drive for this). I, for example, store my Netscape mail profile and My Documents on that disk (they can be shared between NT and Windows98).
      Reboot your pc. Don't forget to add cdrom support when you reboot. I guess it's quite obvious that you'll have to format your freshly created FAT32 partition, but just to be on the safe side: now you must format your new partition. That partition will now have drive letter e: assigned!
      a:\> format e:
      
    7. Everything is ready now to install Windows98. Just switch to your cdrom drive letter (it should be g:, since the Windows98 boot disk created a RAM drive on f:) and type setup. I experienced some problems here regarding the default scandisk during setup. Windows found an error (which wasn't really an error) on my second hard disk. Although setup told me that I could select the option "continue" later on if I knew for sure that everything on that disk was all right, I never had the chance to do this. Setup wasn't prepared to continue due to errors (and I know for sure that it wasn't an error but just a partition a diskmanager had created) on the second disk. But this disk had to be inserted in order to install Windows on e:. I've searched and searched in the *.txt files on the Windows98 disk, and this is how you can avoid the standard scandisk and fly straight into the setup itself:
      g:\> setup /is
      
      Now you can continue the installation. Don't forget to change the destination drive for your Windows98 system files! Just perform a normal Windows98 install. When this is finished, check whether you're able to boot into DOS 6.22. You can do this by pressing F8 at the beginning of the Windows98 boot process. When you've got your mini-dualboot DOS-Windows98, it's time for the big brother: NT.

    8. Now there are two possibilities. If you already have your three NT install disks, then you can jump to step 10. If you don't have these disks, you should create them by changing to the \i386 folder on your cdrom and typing
      f:\i386> winnt /ox
      
      at the command prompt, where f: is the cdrom drive letter. You can do this by booting into DOS (your DOS cdrom drivers have got to be installed to do this) or you can open a DOS box in Windows98. Make sure your three disks are empty, high density and formatted.

    9. Now you are ready to install Windows NT. The good news is that the disks you've just created won't be used. The reason I've had you make those disks is that they come in very handy when you are experiencing problems or need to repair an NT installation. When you do a normal NT install (without switches), NT will ask you to make those disks, and also boot from them to continue the install. Creating those three disks and later on loading the install program from those same disks takes a lot of time.

    10. When you already have your disks (either you've just created them or you had them all along), you can save a lot of time by typing:
      f:\i386\winnt /b
      
      at the command prompt (the best thing to do is to boot straight into DOS, although I think it's possible from Windows98 too in case you haven't got the appropriate cdrom drivers), where f: is the cdrom drive letter. NT will now start the setup, skipping the disk-thing.

    11. I'm not going to run through the whole install procedure here. Just a few things you must pay attention to. In the text-based setup stage you will be asked where you want the NT files to be stored. You'll be able to choose between installing NT on your c: drive or creating a new partition in the unpartitioned space. It goes without saying that you should create a new partition. You do this by selecting the unpartitioned space and pressing "c". Now you can enter the required size of your NT drive (I made mine 1.7 gig). Also don't forget to convert your NT drive to NTFS and format it. Continue the install procedure. Reboot your pc when finished.

    12. If all went well, there should be an entry in the NT OS Loader for MS-DOS and for Windows NT (normal mode and safe/vga mode). Try booting in DOS and NT to see if everything still works OK. If you select the "MS-DOS" option in the loader, Windows98 will be started. When you don't want to boot into Windows98 but straight into DOS, the old rule applies: press F8 as fast as you can after selecting "MS-DOS" and pressing the enter-key. There is a way to add both DOS and Windows98 in your bootmenu.

    13. That's it! You've got your dualboot (it's even a small multiboot) into DOS, Windows98 and Windows NT. If all went well, every operating system should use its own filesystem. You can check this. You can see now that in Windows98 there is no additional drive where you can see the Windows NT files. When you boot into Windows NT you will see the Windows98 drive, but you won't be able to access it. If you can access this drive, or you can access the Windows NT drive in Windows98, then you've got a problem. Filesystems are not correct.

    14. You might find it rather annoying that you cannot access those drives. The good news is that you can use two little programs in order to make those drives accessible (read-only!). To be able to read your Windows98 drive in WindowsNT you should download the program FAT 32. If you want read/write access you should register the program (and pay). If you want access to your WindowsNT drive in Windows98, you should download the program NTFSDOS. This program too offers read-only access only; in this case, though, no read/write version is available at all. Downloading these programs is just a hint; you don't have to do this now, but it can be useful. It won't affect your multiboot in any other way, and they are not necessary for continuing this HOWTO.

    15. Time has come to add your final operating system to your pc. I've installed Linux Redhat 6.0. First of all you should check whether your multiboot environment is functioning without Linux. If this is the case, boot your pc using the Linux boot disk. I'm not going to give you an exhaustive manual on how to install Linux; I couldn't do a better job than the manual which comes along with the software.

    16. It is very important to have a solid partition scheme. I had about 2 gig available for Linux on my second disk.

      This is what my partition scheme looks like:

      This is what we already had:
      hda1 (primary partition on first disk): MS-DOS 6.22 (400 mb)
      hda2: extended partition (5.8 gig)
      hda5 (first logical drive within hda2): Windows98 (4.1 gig)
      hda6 (second logical drive within hda2): Windows NT (1.7 gig)
      hdc1 (primary partition on second disk): FAT16 drive I mentioned earlier for sharing between operating systems

      This is what I've created in Linux:
      hdc2 (primary partition on second disk): /boot (10 mb)
      hdc3: extended partition (2000mb)
      hdc5 (first logical drive within hdc3): / (300 mb)
      hdc6 (second logical drive within hdc3): swap (128 mb)
      hdc7 (third logical drive within hdc3): /home (100 mb)
      hdc8 (fourth logical drive within hdc3): /usr (rest, about 1.5 gig)

      You can create this partition scheme during Linux install. Don't forget to format these partitions and to mount your other -non Linux- drives as well. It's also highly recommended to make a bootdisk for your fresh Linux installation.

    17. The tricky part is configuring Lilo. You must keep Lilo OUT OF THE MBR! The MBR is reserved for NT. If you install Lilo in your MBR, NT won't boot anymore. I've placed the Lilo configuration files in the /boot partition (hdc2 or /boot). You should point Lilo to the root partition (/ or hdc5) instead of the MBR when you are prompted to specify where the boot sector must be created. I first specified the boot partition here (/boot or hdc2); that looked quite obvious to me. The result was that Linux was unable to boot. So you MUST specify your root partition here. YaST will also ask you whether it should activate this partition. DON'T let YaST activate this partition. Your c: drive must remain your active partition, otherwise NT won't boot anymore.
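
      With the scheme above, the crucial line in /etc/lilo.conf is the boot= target. A minimal sketch (the kernel image name and timeout are illustrative; only the boot= and root= lines matter here):

      # The boot sector goes to the Linux root partition -- NOT to /dev/hda (the MBR)!
      boot=/dev/hdc5
      map=/boot/map
      install=/boot/boot.b
      prompt
      timeout=50
      image=/boot/vmlinuz
          label=linux
          root=/dev/hdc5
          read-only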

    18. If Linux is properly installed, time has come to add all the operating systems to the NT boot menu. I've used the excellent (freeware!!!) program BootPart for this. When you run bootpart from a DOS box in NT, you'll get an overview of all your partitions again. This is quite a list by now. To add both Windows98 and DOS to the menu (without using the boring F8 key) just type these commands at your command prompt in a DOS box in NT:
      c:\bootpart> BOOTPART DOS622 C:\BOOTSECT.622 "MS-Dos 6.22"
      c:\bootpart> BOOTPART WIN95 C:\BOOTSECT.W95 "Windows 98"
      c:\bootpart> BOOTPART REWRITEROOT:C: 
      
      As simple as that. Maybe you could try rebooting your system, and see whether this is working or not. Our multiboot system is nearly finished by now. A Linux entry in the NT bootmenu will complete things. Type
      c:\bootpart> bootpart
      
      at the command prompt in your DOS-box in NT. This will result in a list of all your partitions. There you should search for the number of your Linux root partition. When you know this one, you just type
      c:\bootpart> BOOTPART $linuxpartition$ BOOTSECT.LIN Linux Redhat 6.0 
      
      in your bootpart directory. Fill in the corresponding partition number where I typed $linuxpartition$. This should be the partition where you created the boot sector (your root or / partition!). If all goes well, an entry will be made in your boot.ini and thus in the NT boot menu.
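
      If everything worked, the resulting boot.ini should look roughly like this (the ARC path and partition numbers follow my layout and will differ on yours):

      [boot loader]
      timeout=30
      default=multi(0)disk(0)rdisk(0)partition(2)\WINNT
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(2)\WINNT="Windows NT Server 4.0"
      multi(0)disk(0)rdisk(0)partition(2)\WINNT="Windows NT Server 4.0 [VGA mode]" /basevideo /sos
      C:\BOOTSECT.622="MS-Dos 6.22"
      C:\BOOTSECT.W95="Windows 98"
      C:\BOOTSECT.LIN="Linux Redhat 6.0"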

    19. That's it! Everything should be fine by now. If you need some more info on BootPart, you can read the readme.txt file that's included in the zipfile. If you don't want to use this (really easy to use, free and excellent) software, you can do all this manually. I'm not going to explain to you how (it's really not worth the hassle). You should visit http://www.windows-nt.com/multiboot/directboot.html for more info on this matter.

    Hopefully my efforts here were not in vain. I tried to give you as many details as possible without going too deep into any one of them. If something (for some reason or another) isn't working on your system, or I made a mistake, please let me know and help me keep this info as good as possible. You can always mail me at bever@phreaker.net.

    http://www.windows-nt.com/multiboot/directboot.html
    http://metalab.unc.edu/LDP/HOWTO/mini/Linux+NT-Loader.html
    http://venus.kulnet.kuleuven.ac.be/LDP/HOWTO/mini/Multiboot-with-LILO.html
    http://world.std.com/~mruelle/multiboot.html
    http://metalab.unc.edu/LDP/HOWTO/mini/Linux+Win95.html
    ftp://sunsite.unc.edu/pub/Linux/docs/HOWTO/unmaintained/mini/Linux+DOS+Win95
    http://www.bcpl.lib.md.us/~dbryan/directboot.html
    http://hpmag.cern.ch/computing/dict/b/boot/index.html

    If you know other interesting pages regarding this subject, or you have any comments, please feel free to contact me.

    Last update: September 25, 1999


    Copyright © 1999, Tom de Blende
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    rms @ UBC

    By Eric Hayashi


    Table of Contents

    Introduction
    Thursday morning
    Stallman
    O'Reilly
    Conclusion


    Introduction

    Inspired by the informative and entertaining write-ups of Stephen Adler, most relevantly "An Ode to Richard Stallman" (LG #37), I recently took it upon myself to document Stallman's foray into the Great White North. First, a brief introduction. I'm relatively new to Linux - primarily a Windows and Unix user until about a year ago, thanks to the Linux box that I started using as a graduate student in astronomy at the University of Victoria. Since then I've been using Linux every day (and loving it!), while developing a budding interest in the history of Linux and the free software movement.

    About a month ago, while checking out the VLUG links page, I happened to stumble across the linux.bc.ca website. Just in time, as it turns out, since Richard Stallman is scheduled to talk on "Freedomware: The GNU/Linux System and the Free Software Movement" in Vancouver at the University of British Columbia on Thursday, September 23. As an added bonus, Tim O'Reilly is speaking on Friday, so if I stay I'm thinking I'll get the full spectrum of opinions on free software and documentation. Flash forward to...


    ...Thursday morning

    In typical Hayashi fashion, I've managed to make things interesting from the get-go by sleeping in by an hour. Unfortunately, the life of an astronomy grad student has done nothing to improve my predisposition towards getting up on time. Oh well, it's not the end of the world: I'll just have to take the 12:00 noon ferry to the mainland instead of the 11:00 as I'd planned. This still gives me about two hours to get from the ferry terminal to UBC. Plenty of time, but I decide to play it safe and take the coach that runs directly from downtown Victoria, via the ferry, to downtown Vancouver. Twice as expensive as public transit, but faster and half as stressful since you don't have to worry about bus schedules, transfers, correct change, etc. My anxiety level drops by half once I'm in the lineup to board the coach with ticket in hand.

    I start thinking ahead to Stallman's lecture this afternoon. I'm pretty excited about going to see the man behind GNU, not to mention Emacs, the greatest editor of all-time, ever. Maybe I'll even get to meet the man afterwards... Ulp! From what I've read rms can be a somewhat intimidating fellow. I can just imagine myself saying something foolish to draw his ire. "GNU/Linux," I start repeating to myself. "Not just `Linux,' `GNU/Linux'"!

    Pretty soon the bus is parked onboard the ferry and we're shuffled up to the passenger decks. Normally I'm not a big fan of the ferry ride between Vancouver and Victoria. Usually I'm traveling alone and just want to read or sleep but can never find a quiet place to do either. There's always someone nearby talking just loud enough to be a distraction. This time around it isn't that bad, though. I think the key is to spend as much time as possible outside on deck. The morning clouds are starting to burn off and the Gulf Islands can look quite spectacular under a little sunshine. I sit down on a bench, eat a couple sandwiches and snap some pics. Life is good.

    The ferry hits land at about 1:40 and the coach drops me off at Cambie and Broadway at 2:30 with plenty of time to spare. I hop on the 99 B-line express that goes west to the university. So there I am standing near the back, minding my own business when I overhear the words "Red Hat" and "Debian" in a conversation behind me. There's an empty seat next to one of the guys talking so I grab it and ask if they're going to the Stallman lecture. Turns out they are - they're comp sci students from nearby Simon Fraser University (SFU). One guy's got a 3 1/2'' floppy in his hand - hoping for an autograph perhaps? He says it's a Linux boot disk with nethack on it. They seem like pretty cool hacker-types and we end up chatting for the rest of the bus ride.

    We get off at UBC and after wandering around campus for a while we finally arrive at Woodward IRC lecture hall 2. It's still fairly early yet - there's only a handful of people scattered about the lecture hall. We grab some centre seats about a dozen rows back. One of the guys, Ryan, whips out a laptop, fires up Debian, and starts an X-window session with fvwm as the window manager. (Later we watch in horror as a guy near the front starts up Windows on his own laptop ("wanker"!)). They start playing some game with flying triangles ("bratwurst"?) and a command-line syntax that looks Lisp-like. After a little hacking one of the guys gets a triangle to rotate. Cool!

    Finally we catch our first glimpse of Stallman. He looks a lot less imposing than I'd imagined him. (In my mind I'd pictured an immense being with limbs like redwoods and a voice like thunder.) Despite his reputation, I find later that he's surprisingly easy to talk to and generally quite gracious, especially to people asking very basic questions about GNU. He's constantly fiddling with his hair when he's answering a question (looks like he's checking for loose ends) but as long as you have something interesting to say, you have his full attention.


    Stallman

    The lecture gets underway, and I start scribbling. (Unlike Stephen Adler, I'm forced to take notes the old fashioned way, with pen and paper. On the bright side, I don't have to worry about spilling coffee on my non-existent laptop.) Dr. Rabab Ward, director of the Center for Integrated Computer Systems Research (co-sponsors of the event along with VanLUG) introduces comp sci prof Ed Casas who starts telling us about rms until Stallman complains "You're giving my whole speech!" Thus, the introduction gets cut short and at last rms steps up to the podium.

    The first half of his talk is a retelling of the history of the GNU Project that appears on the GNU website, so I won't bother with a detailed recap. (A complete transcript of my notes appears here.) Even though it was a familiar tale (for me anyway) it was cool to hear it from the man himself. Along the way he extolled the virtues of living cheaply and not being "a slave of a desperate need for money" with expensive habits like "stamps, art, and children!" I guess we won't be seeing any little Stallmans running around anytime soon... He went on to say that as president of the Free Software Foundation (FSF), he decided not to take a cut of the money raised by FSF, since paying himself would be "like throwing money away, because we can get Stallman to work for nothing." So if we like the software he has helped to develop, we could either donate money to FSF or to Stallman himself. Hmm...

    In explaining the four freedoms which define free software, he compared new measures being adopted by the US government to deter prohibited copying to those employed by the Russian establishment, and went on to conclude that "nothing but a police state can possibly stamp out freedom 2 [the freedom to redistribute copies so you can help your neighbour]." After describing freedom 3, the freedom to publish an improved version of a program, he mentioned that the Open Source Initiative (OSI) promotes free software by concentrating solely on the benefits of freedom 3. Stallman believes that in doing this OSI is leaving out the most important things GNU has to say, and that, while GNU and OSI are allies with respect to software development, they remain "rivals in the domain of philosophical debate." He also talked about how software can be free for some users and not others, using the licensing of the X Window System as an example (see "The X Windows trap").

    He devoted the last part of his talk to issues which must be addressed in order to ensure the continued existence of a free OS five years down the road. First up: the problem of hardware products whose specifications are kept secret by their manufacturers and that can only be operated via proprietary software. The solution to this problem is twofold: 1) discourage people from purchasing hardware that is not supported by free software, and 2) reverse engineer the non-free drivers and write free ones. Secondly, he talked about the pitfall represented by using non-free libraries as a basis for free software development. The obvious example of this is the Qt GUI toolkit used by KDE. GNU is attacking this problem by developing the GNOME desktop environment, as well as Harmony, the free Qt-replacement toolkit. Again, Stallman stressed that it is easy to stay out of this trap if you recognize it as an issue. Finally, he made brief mention of the dangers posed by patents, and the patenting of software features and algorithms (e.g. the GIF patent held by UNISYS).

    Stallman concluded the lecture by arguing that the Linux community and the Open Source movement endanger the future of free software by failing to recognize the value of the freedom it affords. He cited ads for proprietary software in Linux magazines as an example of encouraging users to give up the freedom they've won by using a free OS. In promoting the name `GNU-slash-Linux' over simply `Linux', his aim is to not only give credit to the authors of the GNU software which makes Linux possible, but also to raise awareness of the philosophy of the GNU Project, perhaps causing users to think about the value of freedom and maybe even inspiring them to defend the free software community when it is endangered.

    With that, Stallman opened the floor to questions, the first one being whether he considers any circumstance legitimate justification to write or sell proprietary software. Stallman answered with a succinct "no," but pointed out that 90% of the software industry is about developing custom software ("people don't load sloughware into a microwave"). A guy sitting in front of me asked how programmers would get paid if all software were free. Stallman said that getting paid should be considered secondary to the more important issue of "will people have freedom?" Once that is taken care of, programmers can find new ways to earn a living, e.g. get paid to write free software by companies like Red Hat, or sell copies/support/documentation for free software like GNU.

    Someone posed the fundamental question, "Is it ethical to redistribute something that you're not allowed to redistribute?" Stallman replied simply that the lesser of the two evils is to share with your friend. The audience responded with a thunderous ovation. He went on to say that there is a "war against journals" currently being waged in academia. To fight scientific journals that claim sole rights to the articles they publish, Stallman urged us to include the statement "Permission is granted for verbatim copying of this work" on any articles we submit for publication.

    At this point Stallman took an extended break to sell GNU manuals, give away stickers, and talk one-on-one with audience members. Of the audience of about 200 people, dozens purchased Emacs and Make manuals which rms patiently signed with his customary "Happy Hacking." (He was noticeably quick to point out the "cheapskates" who asked for signatures on the free FSF brochures that were also being distributed.) This was followed by a final Q & A for the thirty or forty hardcore hackers who had stuck around.

    Someone made a comment about linking closed source objects into Linux. Stallman said that Linus made a big mistake when he allowed this to happen. There was a brief discussion of the "Look and Feel" lawsuit which apparently resulted in a tie vote in the US Supreme Court. Since then, industry seems to have lost interest in pursuing it. Stallman, of course, was opposed to the idea of copyrighting an interface. Someone asked the obligatory question about the state of the GNU Hurd. He claimed that there is a working version, but that they haven't yet taken full advantage of the architecture, and that no one is currently working on it full-time. (Seems like the perfect opportunity for a comp sci PhD thesis.) Near the end, a sincere-sounding chap thanked rms for Emacs, and said that, in the 80's, he used to spend a lot of time staring at an Emacs window. Stallman countered, "Does that mean you don't anymore? Emacs misses you. Emacs needs you!" Hee-hee! Great stuff!

    It's after 6:30 by this time and I'm getting hungry, not to mention I was supposed to meet the friends I'm staying with at 6:00 (sorry Trish!). Still, I'm hoping to work up the nerve to talk to Stallman and maybe get a picture with him. Just when I'm thinking of taking off, the questions die out and Stallman wraps up the Q & A. Some more people are getting him to sign manuals, so I wait for an opening and ask him if I could make a personal donation to him and not the FSF in appreciation for creating GNU Emacs. He agrees (!) so I whip out $20 and get Ryan to take a couple pictures of this historic transaction. Woo-hoo! My trip is now officially a success! I quickly say goodbye to the guys from SFU and dash off to meet my friends at the bookstore.


    O'Reilly

    Thanks to the overcrowded Vancouver transit system, I arrive about 15 minutes into Tim O'Reilly's Friday morning talk on "Linux and Open Source Business Models." As it turns out, I don't think I missed much. O'Reilly's talk seems somewhat disorganized - a series of loosely-connected thoughts and stories about the software industry (here are my notes). His main point seems to be that open source and the web are revolutionizing the software business (newsflash!), but when someone asks him about open sourcing his company's publications, he claims that his hands are tied by authors' royalty demands. He goes on to say that he wants to maximize the amount of useful information his books provide. Seems to me the best way to do that is to allow free access to their contents...

    His words about not thinking in the `old frame' and adapting to the `new paradigm' ring hollow considering that O'Reilly & Associates continues to follow a traditional print publication business model. Why not try something truly innovative like selling online access to his books at a reduced price? He ends his talk by imploring the audience to use the new era of the Internet and open source to "find a way for people to want to give you money." Not exactly "Ask not what your country can do for you..." as far as inspirational messages go... Afterwards a VanLUG guy mentions that it's O'Reilly day at the University Bookstore (20% off all O'Reilly books), and O'Reilly plugs a new book of UserFriendly cartoons that's coming out soon.

    Unlike the Stallman lecture, there seems to be much less of a hacker presence, somewhat understandably since this was a talk about business models. After the moral conviction of Stallman's words yesterday, the things O'Reilly had to say about the new frontier in the software industry paled in comparison. Freedom is something you can laugh, cry, or shake your fist in the air about. And the heart and soul of GNU is a belief in helping others. In comparison, the business of making money is a cold, logical affair that's not very conducive to exciting people's passions. After the talk hardly anyone in the audience of about 70 or 80 rushes the stage to talk to O'Reilly like they did yesterday with rms.


    Conclusion

    When it's all over I shuffle off to the bookstore to check out the O'Reillys. But before I get there I've already made up my mind not to buy anything. Stallman got to me. I can't buy another "animal book" in good conscience, at least not until I give it some serious thought. It's just as well - Dynamic HTML: The Definitive Reference is selling for $57. Even with the 20% discount, that's more than I'm willing to pay for information that I can probably find for free on the web. Granted, it might not come in the form of a nicely bound softcover that I can peruse whilst sitting on the john... I guess that's what Stallman meant yesterday when he was talking about sacrificing convenience for freedom. With that thought in mind I hop on a bus and start the long journey home...

    finis

    Special Thanks to

    Editor's note

    If you didn't follow the links to Hayashi's notes above, they are definitely worth a read. Here are the links again:


    Copyleft © 1999, Eric Hayashi
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Setting Up a Linux Server Network

    By Alex Heizer


    Abstract: A small business acquires a stable network by installing Linux on their servers.

    The computer systems of a small business often leave something to be desired, especially in New Jersey. Sometimes a collection of off-the-shelf PCs running generic office applications gets the job done, but they're not the most efficient way of doing it.

    One of the main obstacles to upgrading any business' computers is when the employees become dependent on one system, program or way of doing things. The thought of making any change, however minor, often strikes holy terror into the heart of any boss or company owner. Cindy Wallace, owner of Binding Specialties in Camden, New Jersey, is no exception. After discussing our options, Cindy and I decided the best route to take for getting the company's computers networked was a Linux-based file server. This would allow transparent access to important files from every workstation in the office, with user-level security for important confidential data. The biggest change in this type of setup would be each user having to log on to their computer instead of just accepting the default generic desktop. Using Linux would also save quite a bit of money, because even a five-user license for Intel-architecture server software from that other software company can cost up to $1000, without a mail or web server. Although we have only a few people using the computers, this would be limiting from day one and would waste more money as the company expanded. Another important consideration was ease of administration, since I spend much of my time in the shop working on production.

    Hardware consists of five x86-based PCs, the least powerful of which is a Pentium-133 with 8MB of RAM. We decided to keep the faster machines for workstations, since a P133 with 8MB RAM was sufficient for a Linux server in a network of this size. The other machines are a 450MHz Celeron-based HP Pavilion, a 366MHz Celeron Dell Inspiron notebook and a few Pentium II-based custom-built boxes. All four of these machines came pre-installed with GUI-based operating systems from a software vendor near Seattle, Washington. We figured integrating these computers would be easy using Samba.

    We quickly purchased the additional hardware we needed, including NICs, cables, LAN hub, UPS and a new 13GB hard drive for the server, since the existing hard drive had less than 600MB capacity. This would ensure adequate storage space for all company files. The next step was installing Linux and configuring everything. For this, I chose the new Caldera OpenLinux distribution. I originally planned to use Slackware 3.5, since I was most familiar with it, and wanted to get up and running as quickly as possible. However, having recently found the setup of Caldera on a personal machine to be quite easy and still in possession of the CD-ROM, I decided to opt for the up-to-date kernel and programs that come with OpenLinux. Cindy was happy she didn't have to shell out any money for the OS.

    The installation was tricky because the graphical installation program requires 32MB of system RAM, but it went fine with a temporary RAM transplant. Unlike our workstation operating systems, each of which took several tries to recognize the NICs, Linux correctly identified all installed hardware the first time through. The only problem occurred when rebooting the system, because OpenLinux is set to start KDE on bootup--it took forever on 8MB of RAM.

    Once the server was up and running, it was a simple matter of going around to each of the workstations and setting them up for networking. After checking the numerous boxes in all the endless tabs and filling in all the fields, each workstation was configured. Setting up the server with OpenLinux required filling in an IP address, gateway address and domain name during the installation, then uncommenting lines in smb.conf, the Samba configuration file. This was easy, which surprised me, considering Linux is well-known for being hard to install and set up. One problem I had with the workstations was that the OS released in 1998 requires encrypted passwords, while the 1995 version uses plaintext passwords. When Samba was first configured, the 1995 computers interfaced perfectly, while the '98s had trouble logging in to the server until I uncommented a few more lines in smb.conf. Of course, there was no mention of this difference in any of the workstations' documentation, on-line help or troubleshooting guides. We feel that using a more homogeneous collection of operating systems would have simplified things a bit more, but that will have to wait for more commercial applications to be released for Linux.
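
    For the record, the change usually boils down to enabling a couple of lines in the [global] section of smb.conf, something like the following (these are standard Samba options, though the password file location varies by distribution):

    [global]
       ; Windows 98 and NT expect encrypted passwords; Windows 95 speaks plaintext
       encrypt passwords = yes
       smb passwd file = /etc/smbpasswd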

    The next step was setting up user accounts on each computer for everyone in the office, putting all the shared data onto the server, and linking appropriate shortcuts to that data on each workstation. Database files for financial and contact managing applications, as well as spreadsheets, letters, artwork and essential job information are all stored on the server for each workstation to access. This frees up valuable disk space on the workstations that can now be used for installation of important games. Backups of all critical data are easily done with a single Colorado Trakker tape drive. This leaves the IOmega Zip drive free for storage of MP3 clips and graphics downloaded from the Internet when no one is looking.

    Some people feel Linux is not yet ready for the desktop. My office is probably typical in that the people who use the computers on a daily basis do so because they have a job to do, and pencil and paper is more of a hassle and clutters up their desk. The applications they need and know are available for other operating systems, and it would take more time and effort than it is worth for them to convert to Linux-compatible programs. However, using off-the-shelf computers as personal workstations, no matter how confusing, confounding or questionable in the reliability department, with a Linux-based back end makes a reliable, cost-effective, familiar, easy-to-use network. Cindy is very impressed with the power and features of Linux, and we await the day when some of the vendor-specific software she currently uses will support this stable desktop.

    In the future, Binding Specialties is planning to get on the Web with their own domain and go live with an Apache web server. Also on the horizon for us is having dial-in, TELNET, FTP and POP service run from the server. Cindy is excited about not having to buy an extra server program.

    Binding Specialties' new network is a great example of a small business setup that benefits not only from Linux' power, flexibility and reliability, but also from its economical bottom line. With the ease of installation and setup of the newer distributions, there aren't any excuses for small businesses not to have a reliable network if they need one, even if their current corral is limited to standard GUI-windowed x86 PCs.


    Copyright © 1999, Alex Heizer
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Securing Linux: The First Steps

    By Peter Lukas


    Not too long ago, I sat patiently while the latest kernel version trickled down my slow, analog dial-up connection. Throughout the entire process, I longed for the day when high-speed Internet access would be available in the home. The arrival of xDSL and cable modems to the doorstep has made this dream a reality, but not without its price.

    As I write this, somewhere in the world, someone is setting up a Linux distribution on their home computer for the first time. The new Linux administrator takes the system for a spin by firing up accounts for family and friends. Just a few short hours after the initial installation, this new Linux system is an Internet presence thanks to its high-speed DSL connection.

    It Is Also a Sitting Duck

    Nearly all Linux distributions available today are insecure right out of the box. Many of these security holes can be easily plugged, but tradition and habit have left them wide open. A typical Linux installation boots for the first time offering a variety of exploitable services like SHELL, IMAP and POP3. These services are often used as points of entry for rogue netizens who then use the machine for their needs, not yours. This isn't just limited to Linux--even the most sophisticated commercial UNIX flavors ship with these services and more running right out of the box.

    Without assessing blame or pointing fingers, it is more important that these new machines become locked down (hardened, to pin a technical term to it). Believe it or not, it doesn't take an expert in system security to harden a Linux machine. In fact, you can protect yourself from 90 percent of intrusions in less than five minutes.

    Getting Started

    To begin the process of hardening your machine, ask yourself what role your machine will play and how comfortable you are with connecting it to the Internet. Carefully decide which services you want to make available to the rest of the world. If you are unsure, it's best not to run any. Most importantly, create a security policy for yourself. Decide what is and what is not acceptable use of your system.

    For purposes of this article, the example machine is a workstation that will be used for typical Internet access such as mail and news reading, web browsing, etc.

    Securing Network Services

    First, gain superuser (root) access to the system and take an inventory of its current network state by using the netstat command (part of net-tools and standard on most Linux systems). An example of its output is shown here:

    [root@percy /]# netstat -a
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address   Foreign Address         State
    tcp        0      0 *:imap2                 *:*             LISTEN
    tcp        0      0 *:pop-3                 *:*             LISTEN
    tcp        0      0 *:linuxconf             *:*             LISTEN  
    tcp        0      0 *:auth                  *:*             LISTEN  
    tcp        0      0 *:finger                *:*             LISTEN  
    tcp        0      0 *:login                 *:*             LISTEN  
    tcp        0      0 *:shell                 *:*             LISTEN  
    tcp        0      0 *:telnet                *:*             LISTEN  
    tcp        0      0 *:ftp                   *:*             LISTEN  
    tcp        0      0 *:6000                  *:*             LISTEN  
    udp        0      0 *:ntalk                 *:*                     
    udp        0      0 *:talk                  *:*                    
    udp        0      0 *:xdmcp                 *:*                     
    raw        0      0 *:icmp                  *:*             7       
    raw        0      0 *:tcp                   *:*             7
    
    As you can see from that output, a fresh installation left a number of services open to anyone within earshot. Most of these services are known troublemakers and can be disabled in the configuration file, /etc/inetd.conf.

    Open the file with your favorite text editor and begin to comment out any services you do not want. To do this, simply add a ``#'' to the beginning of the line containing the service. In this example, the entire file would be commented out. Of course, should you decide at some point that you would like to offer some of these services, you are free to do so.
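
    For instance, on a Red Hat-style system, part of a fully commented-out /etc/inetd.conf might look like this (the exact service lines and daemon paths vary between distributions, so treat this as a sketch rather than your actual file):

    #ftp     stream  tcp     nowait  root    /usr/sbin/tcpd  in.ftpd -l -a
    #telnet  stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd
    #shell   stream  tcp     nowait  root    /usr/sbin/tcpd  in.rshd
    #login   stream  tcp     nowait  root    /usr/sbin/tcpd  in.rlogind
    #talk    dgram   udp     wait    root    /usr/sbin/tcpd  in.talkd
    #pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd  ipop3d
    #imap    stream  tcp     nowait  root    /usr/sbin/tcpd  imapd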

    Now, restart inetd to reflect the changes. This can be done in a number of ways and can differ from system to system. A simple

    killall -HUP inetd
    
    should do the trick. Check the open sockets again with netstat and note the changes.
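
    After the edit and the HUP, the same netstat -a on the example machine should shrink to something like this (X and xdmcp remain, since they are not started from inetd):

    [root@percy /]# netstat -a
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address   Foreign Address         State
    tcp        0      0 *:6000                  *:*             LISTEN
    udp        0      0 *:xdmcp                 *:*
    raw        0      0 *:icmp                  *:*             7
    raw        0      0 *:tcp                   *:*             7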

    Next, take a look at what processes are running. In most cases, you'll see things like sendmail, lpd and snmpd waiting for connections. Because this machine will not be responsible for any of these services, they will all be turned off.

    In most cases, these services are launched from the system initialization scripts. These can vary somewhat from distribution to distribution, but they are most commonly found in /etc/init.d or /etc/rc.d. Consult the documentation for your distribution if you are unsure. The goal is to prevent the scripts from starting these services at boot time.

    If your Linux distribution uses a packaging system, take the time to remove the services you do not want or need. On this example machine, those would be sendmail, any of the ``r'' services (rwho, rwall, etc), lpd, ucd-snmp and Apache. This is a much easier approach and will ensure the services aren't activated accidentally.
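
    On an RPM-based distribution, for example, the cleanup might look like this (the package and service names here are illustrative and differ between vendors):

    # Keep the packages but stop the services from starting at boot...
    /sbin/chkconfig sendmail off
    /sbin/chkconfig lpd off
    # ...or remove the packages outright.
    rpm -e sendmail ucd-snmp lpr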

    Securing X

    Most recent distributions enable machines to boot for the first time into an X Window System login manager like xdm. Unfortunately, that too is subject to exploits. By default, the machine will allow any host to request a login window. Since this machine has only one user that logs into the console directly, that feature will need to be disabled as well.

    The configuration file for this varies depending on which version of the login manager you are using. This machine is running xdm, so the /usr/X11R6/lib/X11/Xaccess file will need to be edited. Again, add a ``#'' to prevent the services from starting. My Xaccess file looks like this:

    #* #any host can get a login window
    #* #any indirect host can get a chooser
    
    The changes will take effect when xdm restarts.

    Software Updates

    Now that some of the basic hardening has been done, it is necessary to check with the vendor for updates and enhancements to the distribution. Poor maintenance or none at all is another large contributor to system compromises.

    One of the blessings of open-source software is that it is constantly under development. Security vulnerabilities are often discovered by a number of people, and a fix is available within days, if not hours of its discovery. As a result, most vendors actively maintain their Linux distribution. Quite often, they post updates, bug fixes and security advisories on their web site. Make a daily or weekly visit to your vendor's site and apply any patches or updates they post.

    The Next Step

    By this point, the machine is far more secure than when it was first installed. It isn't invulnerable to attack, but at least it is no longer extending an invitation to attackers. The approach outlined here is similar to that of locking your home or car. The average thief will jiggle the handle, realize that it's locked and move on to one that isn't.

    Should you decide these steps do not provide enough security, or you wish to provide some network services across the Internet, take the time to research some advanced security techniques before you do so.

    Unfortunately, vendors of most Linux distributions assume their customers already know about these services and want to use them. This isn't always the case for newcomers. Of course, there is still a large amount of ground to cover before total Linux system security can be achieved, but these steps provide a basic foundation and awareness of system security.

    To date, the majority of system and network compromises are relatively minor. As Linux increases in popularity and high-speed Internet access becomes more available, attacks on unprepared Linux systems will only become more severe and abundant.


    Copyright © 1999, Peter Lukas
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Programming in Dino

    By Vladimir N. Makarov



    Dino is a high-level, dynamic scripting language that has been designed for simplicity, uniformity, and expressiveness. Dino is similar to such well-known scripting languages as Perl, TCL, and Python. As most programmers know the C language, Dino resembles C where possible.

    Dino is an extensible, object-oriented language that has garbage collection. It supports the description of parallelism, exception handling, and dynamic loading of libraries written in other languages. Although Dino is a multiplatform language, its main platform is Linux.

    This document is a concise introduction to the new Dino scripting language, but is not a programmer's manual.


    1. Some History

    Originally, Dino was designed and implemented by the Russian graphics company ANIMATEK to describe the movement of dinosaurs in an animation project. (This is the origin of the language's name.) At that time it worked in only 64KB of memory. It has since been considerably redesigned and reimplemented with the aid of the COCOM toolset.

    2. Let's Begin

    The best way to get the feel of a programming language is to see a program written in it. Because I have worked in the compiler field for the last 18 years, I'll write a small assembler in Dino.

    Most of us do not remember how programmers wrote programs for old computers that had only a few kilobytes of memory. Long ago I read about an Algol 60 compiler that worked on a computer that had only 420 20-bit words. In an old book, "Compiler Construction for Digital Computers", Gries describes an Algol compiler working in 1024 42-bit words. How did they achieve this? One way is to use an interpreter for a specialized language; a program in a specialized language is usually smaller. Let's implement an assembler for syntactic parsers. The assembler output will be a syntactic parser interpreter in C. The assembler instructions have the following format:

    [label:] [code [operand]]
    Here, the constructions in brackets are optional. For convenience we will allow comments that start with ; and finish at the end of the line.

    The assembler will have the following instructions:

    goto label Transfer control to the instruction marked label.
    gosub label Transfer control to the instruction marked label and save the next instruction address.
    return Transfer control to the instruction following the latest executed gosub instruction.
    skipif symbol If the current token is symbol, the following instruction is skipped. Otherwise, transfer control to the following instruction.
    match symbol The current token should be symbol, otherwise a syntax error is flagged. After matching, the next token is read and becomes the current token.
    next The next token is read and becomes the current token.

    The following assembler fragment recognizes Pascal designators.

    ;
    ; Designator = Ident { "." Ident | "[" { expr / ","} "]" | "@" }
    ;
    start:
    Designator:
            match   Ident
    point:  skipif  Point
            goto    lbrack
            next    ; skip .
            match   Ident
            goto    point
    lbrack: skipif  LBracket
            goto    at
            next    ; skip [
    next:   gosub   expr
            skipif  Comma
            goto    rbrack
            next    ; skip ,
            goto    next
    rbrack: match   RBracket
            goto    point
    at:     skipif  At
            return
            next    ; skip @
            goto    point

    2.1. Overall structure of the assembler.

    As a rule, assemblers work in two passes. Therefore, we need to have some internal representation (IR) to store the program between the passes. We will create the following Dino files:

    These files are described in detail below.

    2.2. File ir.d

    This file contains the description of the IR in Dino and also some auxiliary functions. Dino has dynamic variables. In other words, a Dino variable may contain a value of any Dino type. The major Dino types are: The values of the last three types are shared. That means that if a variable value is assigned to another variable, any changes to the shared value through the first variable are reflected in the value of the second variable. In general, working with shared values is analogous to working with pointers in C, but with fewer risks.

    Line 1 describes an abstract node of an IR. A node of this class has the variable lno (the source line of the corresponding assembler instruction). The variable is also a class parameter. That means you should define its value when creating a class instance or object (see line 7). Inside class irn, classes describing each assembler instruction are defined. Each Dino object (and not only objects) stores information about its context. So if you create an object of class next (see line 40 in file input.d) by calling a class that is accessed through an object of class irn, and you then read the variable lno through the object of class next, you actually get the value of the variable of the object of class irn. This is a simpler and more general mechanism for implementing single inheritance.

    An object of the class ir (lines 9-13) contains information about the entire program:

     1. class irn (lno) {
     2.   class goto (lab)     {}     class skipif (sym)    {}
     3.   class match (sym)    {}     class gosub (lab)     {}
     4.   class next ()        {}     class ret ()          {}
     5. }
     6.
     7. var an = irn (0);
     8. 
     9. class ir () {
    10.   // all ir nodes, label->node index, node index -> vector of labels.
    11.   var ns = [], l2i = {}, i2l = {};
    12.   var ss = {}, mind, maxd;
    13. }
    14.
    15. func err (...) {
    16.   var i;
    17.   fput (stderr, argv[0], ": ");
    18.   for (i = 0; i < #args; i++)
    19.     if (args[i] != nil)
    20.       fput (stderr, args[i]);
    21.   fputln (stderr);
    22.   exit (1);
    23. }
    24.
    25. func tab2vect (tab) {
    26.   var i, vect = [#tab:""];
    27.   for (i in tab)
    28.     vect [tab {i}] = i;
    29.   return vect;
    30. }
    Lines 15-23 contain a function to output errors. The function accepts a variable number of parameters, whose values will be the elements of the vector in the implicitly defined variable args. Any function or class can be called with any number of actual parameters. If the number of formal parameters is greater than the number of actual parameters, the rest of the formal parameters will have the default value nil. If the number of actual parameters is greater than the number of formal parameters, the extra actual parameters will be ignored unless the last formal parameter is "...".

    The other elements used by this function are predefined: argv is a vector of the program's command-line arguments, stderr is the standard error stream, fput and fputln output their arguments to a file (fputln also outputs a newline), and exit finishes the program.

    There are many other predefined functions, classes, and variables in Dino. On line 18 you can see the operator #, which returns the number of elements in a vector or an associative table.

    Lines 25-30 contain a function that transforms a table into a vector. The table's keys are a sequence of integers starting with 0. The result is a vector whose elements are the table elements placed according to their keys. First we create a vector of the table's size and initialize it with empty strings (line 26). Then we fill each element of the vector, iterating over the keys of the table (lines 27-28).

    2.3. File input.d

    This file contains the function get_ir, which reads the file given as its parameter, performs some checks on the file, and generates the IR of the source program.

    The first line contains an include-clause that specifies a source file without the suffix .d (all Dino source files should have this suffix). The file is given as a string in the clause; the entire file is inserted in place of the clause. As a result, we could check the file by calling the Dino interpreter with input.d on a command line. There are several rules that define which directories are searched for the included file. One such directory is the directory of the file that contains the include-clause. Thus, we can place all the assembler files in that one directory and forget about the other rules.

    The file is inserted only once in a given block (a block is the construction that starts with { and finishes with }). This is important for our program because each file will contain an inclusion of the file ir.d, and eventually all the files will be included into the main program file. Unconditional inclusion in this case would result in many error messages about repeated definitions. By the way, there is also a special form of the include-clause that permits unconditional file inclusion.

    On lines 6-13 we define some variables. We use regular expressions to assign them strings that describe correct assembler lines. These are the extended regular expressions described in POSIX 1003.2. To concatenate the strings (vectors), we use the operator @.

    Lines 16-53 form a try-block that is used to process exceptional situations in the Dino program. The Dino interpreter can generate many predefined exceptions; a Dino programmer can also describe and generate other exceptions. The exceptions are objects of the predefined class except, or objects of a class defined inside the class except. Dino has special constructions (extension blocks) to add something to a class or a function when the class or function is already defined. In our example, the exception we catch is "reaching the end of a file", which is generated by the predefined function fgetln (reading a new line from a file). If we did not catch the exception, the program would finish with a diagnostic about reaching the end of the file. In the catch-clause, we write the class of exceptions that we want to catch. The value of the predefined variable invcalls is the class invcall, inside which the class eof is defined; in turn, the class invcall is defined inside the class except. If the exception is of a class given in the catch-clause, or of a class defined somewhere inside a class given in the catch-clause, the block corresponding to the catch-clause is executed. The variable e holding the exception is implicitly defined in that block. If no catch-clause corresponding to the exception is found, the exception is propagated further.

    The predefined function fgetln returns the next line from the file. After this, the line is matched against the pattern on line 20. The predefined function match returns the value nil if the input line does not correspond to the pattern; otherwise it returns a vector of integer pairs. The first pair holds the first and last character indexes of the substring that corresponds to the whole pattern. The following pairs of indexes correspond to the constructions in parentheses in the pattern; they define the substrings matched by those constructions. If a construction is not matched (for example, because an alternative is rejected), its indexes have the value -1.

    The statement on line 23 extracts a label. The predefined function subv is used to extract the sub-vectors (sub-strings).

    On lines 24 and 25, we use an empty vector to initialize a table element that corresponds to the current assembler instruction. On lines 26-31, we process a label, if one is defined on the line. On lines 27-28, we check that the label is not defined repeatedly. On line 29, we map the label name to the number of the assembler instruction to which the label is bound, with the aid of the associative table pr.l2i. On line 30, we add the label name to the vector that is the element of the associative table pr.i2l whose key equals the number of the assembler instruction. The predefined function ins (insert an element into a vector) is used with index -1, which appends the element at the end of the vector. Dino has extensible vectors. There are also predefined functions to delete elements from vectors (and associative tables).

     1. include "ir";
     2.
     3. func get_ir (f) {
     4.   var ln, lno = 0, code, lab, op, v;
     5.   // Patterns
     6.   var p_sp = "[ \t]*";
     7.   var p_code = p_sp @ "(goto|skipif|gosub|match|return|next)";
     8.   var p_id = "[a-zA-Z][a-zA-Z0-9]*";
     9.   var p_lab = p_sp @ "((" @ p_id @ "):)?";
    10.   var p_str = "\"[^\"]*\"";
    11.   var p_op = p_sp @ "(" @ p_id @ "|" @ p_str @ ")?";
    12.   var p_comment = p_sp @ "(;.*)?";
    13.   var pattern = "^" @ p_lab @ "(" @ p_code @ p_op @ ")?" @ p_comment @ "$";
    14.
    15.   var pr = ir ();
    16.   try {
    17.     for (;;) {
    18.       ln = fgetln (f);
    19.       lno++;
    20.       v = match (pattern, ln);
    21.       if (v == nil)
    22.         err ("syntax error on line ", lno);
    23.       lab = (v[4] >= 0 ? subv (ln, v[4], v[5] - v[4]) : nil);
    24.       if (!(#pr.ns in pr.i2l))
    25.         pr.i2l {#pr.ns} = [];
    26.       if (lab != nil) {
    27.         if (lab in pr.l2i)
    28.           err ("redefinition lab ", lab, " on line ", lno);
    29.         pr.l2i {lab} = #pr.ns;
    30.         ins (pr.i2l {#pr.ns}, lab, -1);
    31.       }
    32.       code = (v[8] >= 0 ? subv (ln, v[8], v[9] - v[8]) : nil);
    33.       if (code == nil)
    34.         continue;  // skip comment or absent code
    35.       op = (v[10] >= 0 ? subv (ln, v[10], v[11] - v[10]) : nil);
    36.       var node = irn (lno);
    37.       if (code == "goto" || code == "gosub") {
    38.         if (op == nil || match (p_id, op) == nil)
    39.           err ("invalid or absent lab `", op, "' on line ", lno);
    40.         node = (code == "goto" ? node.goto (op) :  node.gosub (op));
    41.       } else if (code == "skipif" || code == "match") {
    42.         if (op == nil || match (p_id, op) == nil)
    43.           err ("invalid or absent name `", op, "' on line ", lno);
    44.         node = (code == "skipif" ? node.skipif (op) : node.match (op));
    45.       } else if (code == "return" || code == "next") {
    46.         if (op != nil)
    47.           err (" non empty operand `", op, "' on line ", lno);
    48.         node = (code == "next" ? node.next (op) : node.ret ());
    49.       }
    50.       ins (pr.ns, node, -1);
    51.     }
    52.   } catch (invcalls.eof) {
    53.   }
    54.   return pr;
    55. }
    On lines 36-49 we check the current assembler instruction and create the corresponding IR node (an object of a class inside the class irn -- see file ir.d). And finally, we insert the node at the end of the vector pr.ns (line 50).

    2.4. File check.d

    After processing all assembler instructions in the file input.d, we can check that all labels are defined (lines 7-9), and we can evaluate the maximum and minimum displacements of goto and gosub instructions from the corresponding label definition (lines 10-13). The function check does this work. It also forms an associative table pr.ss of all symbols given in the instructions match and skipif, and enumerates the symbols (lines 16-17). Here the function inside (lines 6 and 14) is used to test whether an object is of a given class, or of a class defined somewhere in a given class.
     1. include "ir";
     2.
     3. func check (pr) {
     4.   var i;
     5.   for (i = 0; i < #pr.ns; i++) {
     6.     if (inside (pr.ns[i], an.goto) || inside (pr.ns[i], an.gosub)) {
     7.       if (!(pr.ns[i].lab in pr.l2i))
     8.         err ("undefined label `", pr.ns[i].lab, "' on line ",
     9.              pr.ns[i].lno);
    10.       if (pr.maxd == nil || pr.maxd < pr.l2i {pr.ns[i].lab} - i)
    11.         pr.maxd = pr.l2i {pr.ns[i].lab} - i;
    12.       if (pr.mind == nil || pr.mind > pr.l2i {pr.ns[i].lab} - i)
    13.         pr.mind = pr.l2i {pr.ns[i].lab} - i;
    14.     } else if (inside (pr.ns[i], an.match)
    15.                || inside (pr.ns[i], an.skipif)) {
    16.       if (!(pr.ns[i].sym in pr.ss))
    17.         pr.ss {pr.ns[i].sym} = #pr.ss;
    18.     }
    19.   }
    20. }

    2.5. File gen.d

    The biggest assembler source file is the interpreter generator. It generates two files: a .h file (the interface of the interpreter) and a .c file (the interpreter itself). We create the files on line 4. The parameter bname of the function gen is the base name of the generated files. The interface file contains definitions of the codes of tokens in match and skipif instructions as C macros (lines 6-9) and the definition of the function yyparse (line 35). Function yyparse is the main interpreter function. It returns 0 if the source program is correct, and nonzero otherwise.

    The generated interpreter requires the external functions yylex and yyerror (line 34). The function yylex is used by the interpreter to read tokens and return the code of the current token. Function yyerror should output error diagnostics. (The interface is a simplified version of the Yacc Unix utility interface.)

    The compiled assembler program is represented by a C array of chars or short integers with the name program. Each element of the array is an encoded instruction of the source program. On lines 11-15, we evaluate the start code for each kind of assembler instruction and define the type of the array elements. On lines 16-33, we output the array program. On lines 37-61, we output the function yyparse. Finally, on lines 62-63 we close the two output files with the aid of the predefined function close.

     1. include "ir";
     2. 
     3. func gen (pr, bname) {
     4.   var h = open (bname @ ".h", "w"), c = open (bname @ ".c", "w");
     5.   var i, vect;
     6.   vect = tab2vect (pr.ss);
     7.   for (i = 0; i < #vect; i++)
     8.     fputln (h, "#define ", vect[i], "\t", i + 1);
     9.   fputln (h);
    10.   fputln (c, "#include \"", bname, ".h\"\n\n");
    11.   var match_start = 3, skipif_start = match_start + #pr.ss,
    12.       goto_start = skipif_start + #pr.ss,
    13.       gosub_start = goto_start + (pr.maxd - pr.mind) + 1,
    14.       max_code = gosub_start + (pr.maxd - pr.mind);
    15.   var t = (max_code < 256 ? "unsigned char" : "unsigned short");
    16.   fputln (c, "\nstatic ", t, " program [] = {");
    17.   for (i = 0; i < #pr.ns; i++) {
    18.     if (inside (pr.ns[i], an.goto))
    19.       fput (c, " ", goto_start + pr.l2i{pr.ns[i].lab} - i - pr.mind, ",");
    20.     else if (inside (pr.ns[i], an.match))
    21.       fput (c, " ", match_start + pr.ss{pr.ns[i].sym}, ",");
    22.     else if (inside (pr.ns[i], an.next))
    23.       fput (c, " 1,");
    24.     else if (inside (pr.ns[i], an.ret))
    25.       fput (c, " 2,");
    26.     else if (inside (pr.ns[i], an.skipif))
    27.       fput (c, " ", skipif_start + pr.ss{pr.ns[i].sym}, ",");
    28.     else if (inside (pr.ns[i], an.gosub))
    29.       fput (c, " ", gosub_start + pr.l2i{pr.ns[i].lab} - i - pr.mind, ",");
    30.     if ((i + 1) % 20 == 0)
    31.       fputln (c);
    32.   }
    33.   fputln (c, " 0, 0\n};\n\n");
    34.   fputln (h, "extern int yylex ();\nextern int yyerror ();\n");
    35.   fputln (h, "\nextern int yyparse ();\n");
    36.   fputln (h, "#ifndef YYSTACK_SIZE\n#define YYSTACK_SIZE 50\n#endif");
    37.   fputln (c, "\nint yyparse () {\n  int yychar=yylex (), pc=0, code;\n  ",
    38.           t, " call_stack [YYSTACK_SIZE];\n  ", t, " *free=call_stack;");
    39.   fputln (c, "\n  *free++=sizeof (program) / sizeof (program [0]) - 1;");
    40.   fputln (c, "  while ((code=program [pc]) != 0 ?? yychar > 0) {");
    41.   fputln (c, "    pc++;\n    if (code == 1)\n      yychar=yylex ();");
    42.   fputln (c, "    else if (code == 2) /*return*/\n      pc=*--free;");
    43.   fputln (c, "    else if ((code -= 2) ? ", #pr.ss, ") {/*match*/");
    44.   fputln (c, "      if (yychar == code)\n        pc++;\n      else {");
    45.   fputln (c, "        yyerror (\"Syntax error\");");
    46.   fputln (c, "        return 1;\n      }");
    47.   fputln (c, "    } else if ((code -= ", #pr.ss, ") ? ", #pr.ss, ") {");
    48.   fputln (c, "      if (yychar == code)\n        pc++; /*skipif*/");
    49.   fputln (c, "    } else if ((code -= ", #pr.ss, ") ?= ", pr.maxd-pr.mind,
    50.           ") /*goto*/\n      pc += code + ", pr.mind, ";");
    51.   fputln (c, "    else if ((code -= ", pr.maxd - pr.mind + 1, ") ?= ",
    52.           pr.maxd - pr.mind, ") { /*gosub*/");
    53.   fputln (c, "      if (free >= call_stack + YYSTACK_SIZE) {");
    54.   fputln (c, "        yyerror (\"Call stack overflow\");");
    55.   fputln (c, "        return 1;\n      }\n      pc += code + ", pr.mind,
    56.       ";\n      *free++=pc;\n    } else {");
    57.   fputln (c, "      yyerror(\"Internal error\");\n      return 1;\n    }");
    58.   fputln (c, "  }\n  if (code != 0 || yychar > 0) {");
    59.   fputln (c, "    if (code != 0)\n      yyerror (\"Unexpected EOF\");");
    60.   fputln (c, "    else\n      yyerror(\"Garbage after end of program\");");
    61.   fputln (c, "    return 1;\n  }\n  return 0;\n}");
    62.   close (h);
    63.   close (c);
    64. }

    2.6. File sas.d

    This is the main assembler file. Lines 1-4 are include-clauses for the inclusion of the previous files. Lines 6-7 check that an argument is given on the command line. On line 9 we open the file given on the command line and call the function for reading and generating the IR of the program. If the file does not exist or cannot be opened for reading, an exception is generated. The exception results in the output of standard diagnostics and finishes the program. We could catch the exception and do something else, but the standard diagnostics will be sufficient here. On line 10, we check the IR. And finally, on line 11, we generate the interpreter of the program. To get the base name of the assembler file, we use the predefined function sub, which deletes all directories and suffixes from the file name and returns the result.
     1. include "ir";
     2. include "input";
     3. include "check";
     4. include "gen";
     5.
     6. if (#argv != 1)
     7.   err ("Usage: sas file");
     8.
     9. var pr = get_ir (open (argv[0], "r"));
    10. check (pr);
    11. gen (pr, sub ("^(.*/)?([^.]*)(\\..*)?$", argv[0], "\\2"));
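
    Putting the pieces together, a complete run of the assembler might look like the following shell session (a sketch: the interpreter executable name and invocation convention are assumptions, and parser.sas stands for any assembler source file):

        dino sas.d parser.sas    # reads parser.sas, writes parser.h and parser.c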

    2.7. Results

    So we've written the assembler (about 200 lines of Dino). As a test, we will use the Oberon-2 language grammar. You can look at the Oberon-2 parser in the file oberon2.sas. After running the assembler on it, we get the two files oberon2.h and oberon2.c. Let's look at the size of the generated IA-32 code:
        gcc -c -Os oberon2.c; size oberon2.o
    
        text        data    bss     dec     hex     filename
        382         934     0       1316    524     oberon2.o
    For comparison, we would have about 15Kb for a YACC-generated parser. Not bad. But we could make the parser even smaller than 1Kb by using short and long goto and gosub instructions. Actually, what we generate is not a parser, it is only a recognizer. But the assembler language could easily be extended for writing parsers. Just add the instruction:
       call C-function-name
    to call semantic functions for the generation of parsed code. In any case, most of a compiler's code would be in C. To further decrease the compiler size (not only its parser), an interpreter of C that is specialized to the compiler could be written.

    Of course, it is not easy to write a parser in the assembler. So we could generate the assembler code from a high-level syntax description, for example, from a Backus-Naur form. Another area for improvement is the implementation of error recovery, but that was not our purpose. Our goal was just to demonstrate the Dino language.

    3. Some last comments

    Which of Dino's features were missed in this introduction? Many details, of course, but several major parts of the Dino language also had to be left undescribed. The Dino interpreter is distributed under the GNU Public License. You can find it at:

    http://www.linuxstart.com/~vladimir_makarov/dinoload.html
    http://www.freespeech.org/vmakarov/dinoload.html

    Special thanks to Michael Behm (a member of the Cygnus documentation group) for his help in editing this article.


    Copyright © 1999, Vladimir N. Makarov
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Chez Marcel

    By Marty McGowan


    Marcel Gagné's article (Linux Journal #65, September 1999) on French cooking inspired me to share some recipes of my own. The cooking metaphor is not new to computing; Donald Knuth, in his foreword to "Fundamental Algorithms", confesses he almost used "Recipes for the Computer" as its title. Without stirring the metaphor too vigorously, Gagné's article gives me the opportunity to share two items of interest and give them the needed cooking flavor.

    For some time, I've been concerned about what I regard as the overuse or misuse of two programming constructs: temporary files, and variables.

    To continue the cooking analogy, these two may be thought of, respectively, as inconsistent or lumpy sauce, and uneven temperature. Realizing that we chefs like to work on one another's recipes, let's see what happens when we apply them to Marcel Gagné's recipe, "Check User Mail".

    Before I'd read Marcel's article, my style of programming used the tool metaphor. While not much of a chef, I now prefer the cooking metaphor, as it connotes more of a learning and sharing model, which is what we do in programming.

    Marcel's recipe is an excellent starting point for my school of cooking, as his recipe is complete, all by itself, and offers the opportunity to visit each of the points once. First, here is a copy of his recipe, without the comment header.

    for user_name in `cat /usr/local/etc/mail_notify`
    do
    	no_messages=`frm $user_name |
    		grep -v "Mail System Internal Data" |
    		wc -l`
    	if [ "$no_messages" -gt "0" ]
    	then
    		echo "You have $no_messages e-mail message(s) waiting." > /tmp/$user_name.msg
    		echo " " >> /tmp/$user_name.msg
    		echo "Please start your e-mail client to collect mail." >> /tmp/$user_name.msg
    		/usr/bin/smbclient -M $user_name < /tmp/$user_name.msg
    	fi
    done
    
    This script isn't hard to maintain or understand, but I think the chefs in the audience will profit from the seasonings I offer here.

    A by-product of my cooking school is lots of short functions. There are those who are skeptical about adopting this approach. Let's suspend disbelief for just a moment as we go through the method. I'll introduce my seasonings one at a time, and then put Marcel Gagné's recipe back together at the end. Then you may judge the sauce.

    One of the languages in my schooling was Pascal, which if you recall puts the main procedure last. So, I've learned to read scripts backwards, as that's usually where the action is anyway. In Marcel Gagné's script, we come to the point in the last line, where he sends the message to each client. (I don't know Samba, but I assume this will make a suitable function):

     
    	function to_samba { /usr/bin/smbclient -M $1; }
    
    This presumes samba reads from its standard input without another flag or argument. It's used: "to_samba {user_name}", reading the standard input, writing to the samba client.

    And, what are we going to send the user, but a message indicating they have new mail. That function looks like this:

     
    	function you_have_mail {
    		echo "You have $1 e-mail message(s) waiting."
    		echo " " 
    		echo "Please start your e-mail client to collect mail."
    	}
    
    and it is used: you_have_mail {num_messages}, writing the message on the standard output.

    At this point, you've noticed a couple of things. The file names and the redirection of output and input are missing. We'll use them if we need them. But let me give you a little hint: we won't. Unix (Linux) was designed with the principle that recipes are best made from simple ingredients. Temporary files are OK, but Linux has other means to reduce your reliance on them. Introducing temporary files forces you to invent names for them, to clean them up when you are done, and to worry about collisions when two copies of the recipe run at once.

    Therefore, we seek to save ourselves these tasks. We'll see how this happens in a bit.

    A key piece of the recipe is deciding whether or not our user needs to be alerted to incoming mail. Let's take care of that now:

     
    	function num_msg { frm $1 | but_not "Mail System Internal Data" | linecount; }
    
    This is almost identical to Marcel's code fragment. We'll deal with the differences later; the curious among you have already guessed. This function is used: num_msg {user_name}, returning a count of the number of lines.

    What does the final recipe look like? All of Marcel Gagné's recipe is wrapped up in this one line of shell program:

     
    	foreach user_notify  `cat /usr/local/etc/mail_notify`
    
    And that's exactly how it's used. This single line is the entire program, supported of course, by the functions, or recipe fragments we have been building. We peeked ahead, breaking with Pascal tradition, because, after looking at some low-level ingredients, I thought it important to see where we are going at this point. You can see the value of a single-line program. It now can be moved around in your operations plan at will. You may serve your users with the frequency and taste they demand. Note, at this point, you won't have much code to change if you wanted to serve your A-M diners at 10 minute intervals beginning at 5 after the hour and your N-Z diners on the 10-minute marks.
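
    For instance, if the one-liner were split into two scripts, one per half of the alphabet (the script names here are hypothetical), the whole scheduling question would reduce to two crontab entries:

    	# hypothetical crontab entries -- A-M at 5 after, N-Z on the marks
    	5,15,25,35,45,55 * * * *  /usr/local/recipe/notify_a_m
    	0,10,20,30,40,50 * * * *  /usr/local/recipe/notify_n_z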

    So what does "user_notify" look like? I toiled with this one. Let me share the trials. First I did this:

     
    	function user_notify { do_notify $(num_msg $1) $1; }
    
    thinking that if I first calculated the number of messages for the user, and supplied that number and the user name to the function, then that function (do_notify) could perform the decision and send the message. Before going further, we have to digress. In the Korn shell, which I use exclusively, the result of the operation in the expression $( ... ) is returned to the command line. So, in our case, the result of "num_msg {user_name}" is a number 0 through some positive number, indicating the number of mail messages the user has waiting.

    This version of user_notify would expect a "do_notify" to look like this:

     
    	function do_notify { if [ "$1" -gt "0" ]; then notify_user $2 $1; fi; }
    
    This is OK, but it means yet another "notify" function, and even for this one-line fanatic, that's a bit much. So, what to do? Observe, the only useful piece of information in this function is another function name "notify_user". This is where culinary art, inspiration, and experience come in. Let's try a function which looks like this:
     
    	function foo { if [ "$X" -gt "0" ]; then eval $*; fi; }
    
    This is different than the "do_notify" we currently have. First of all, $X is not an actual shell variable; here the X stands for "let's see what is the best argument number to use for the numeric test". And the "eval $*" performs an evaluation of all its arguments. And here's the spice that gives this whole recipe its flavor! The first argument may be another command or function name! A remarkable, and little used, property of the shell is the ability to pass command names as arguments.

    So, let's give "foo" a name. What does it do? If one of its arguments is non-zero, then it performs a function (its first argument) on all the other arguments. Let's solve for X. It could be any of the positional parameters, but to be completely general, it probably should be the next one, as it's the only other one this function ever has to know about. So, let's call this thing:

     
    	if_non_zero {function} {number} ....
    
    Using another convenient shorthand, it all becomes:
     
    	function if_non_zero { [ $2 -gt 0 ] && eval $*; }
    
    and we'll see how it's used later. With this function, user_notify now looks like:
     
    	function user_notify { if_non_zero do_notify $(num_msg $1) $1; }
    
    and is used: user_notify {user_name}. Note the dual use of the first argument, which is the user's name. In one case, it is a further argument to the num_msg function, which returns the number for that user; in the other case, it merely stands for itself, but now as the 2nd argument to "do_notify". So, what does "do_notify" look like? We've already written the sub-pieces, so it's simply:
     
    	function do_notify { you_have_mail $1 | to_samba $2; }
    
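    (A quick sanity check of "if_non_zero" at the prompt: typing if_non_zero echo 2 apples finds that the 2 is greater than zero, and so evaluates "echo 2 apples", printing "2 apples"; typing if_non_zero echo 0 apples prints nothing.)
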
    At this point, we have (almost) all the recipe ingredients. The careful reader has noted the omission of "but_not", "linecount", and "foreach". Permit me another gastronomic aside. Ruth Reichl, recently food editor of the New York Times, is now the editor of Gourmet magazine. One of the things she promises to do is correct the complicated recipes so frequently seen in their pages. For example, "use 1/4 cup lemon juice" will replace the paragraph of instructions on how to extract that juice from a lemon.

    In that spirit, I'll let you readers write your own "but_not" and "linecount". Let me show you the "foreach" you can use:

     
          function foreach { cmd=$1; shift; for arg in $*; do eval $cmd $arg; done; }
    
    A slightly more elegant version avoids the temporary variable:
     
          function foreach { for a in $(shifted $*); do eval $1 $a; done; }
    
    which requires "shifted":
     
          function shifted { shift; echo $*; }
    
    The former "foreach", to be completely secure, needs a "typeset" qualifier in front of the "cmd" variable. It's another reason to avoid the use of variable names. This comes under the general rule that not every programming feature needs to be used.

    We need one final "Chapters of the Cookbook" instruction before putting this recipe back together. Let's imagine by now that we are practicing student chefs and we have a little repertoire of our own. So what's an easy way to re-use those cooking tricks of the past? In the programming sense, we put them in a function library and invoke the library in our scripts. In this case, let's assume we have "foreach", "but_not", and "linecount" in the cookbook. Put that file "cookbook" either in the current directory or, more usefully, somewhere along your PATH. Using Marcel Gagné's example, we might expect to put it in, say, /usr/local/recipe/cookbook, so you might do this in your environment:

     
       PATH=$PATH:/usr/local/recipe
    
    and then, in your shell files, or recipes, you would have a line like this:
     
        . cookbook		#  "dot - cookbook"
    
    where the "dot" reads, or "sources" the contents of the cookbook file into the current shell. So, let's put it together:
     
    # -- Mail Notification, Marty McGowan, mcfly@workmail.com, 9/9/99
    #
      . cookbook
    # -------------------------------------------- General Purpose --
    function if_non_zero	{ [ $2 -gt 0 ] && eval $*; }
    function to_samba	{ /usr/bin/smbclient -M $1; }
    # --------------------------------------- Application Specific --
    function num_msg	{ frm $1 | but_not "Mail System Internal Data" | linecount; }
    function you_have_mail	{
    	echo "You have $1 e-mail message(s) waiting."
    	echo " " 
    	echo "Please start your e-mail client to collect mail."
    }
    function do_notify	{ you_have_mail $1 | to_samba $2; }
    function user_notify	{ if_non_zero do_notify $(num_msg $1) $1; }
    #
    # ------------------------------------------ Mail Notification --
    #
      foreach user_notify  `cat /usr/local/etc/mail_notify`
    
    In closing, there are a few things that suggest themselves here. "if_non_zero" probably belongs in the cookbook. It may already be in mine. And also "to_samba". Where does that go? I keep one master cookbook for little recipes that may be used in any type of cooking. Also, I keep specialty cookbooks for each style that needs its own repertoire. So, I may have a Samba cookbook as well. After I've done some cooking, and in a new recipe, I might find the need for some fragment I've used before. Hopefully, it's in the cookbook. If it's not there, I ask myself, "is this little bit ready for wider use?". If so, I put it in the cookbook, or, after a while, other little fragments might find their way into the specialty books. So, in the not too distant future, I might have a file called "samba_recipe", which starts out like:
     
    # --------------- Samba Recipes, uses the Cookbook, Adds SAMBA --
    . cookbook
    # -------------------------------------------- General Purpose --
    function to_samba	{ /usr/bin/smbclient -M $1; }
    
    This leads to a recipe with three fewer lines, and ". cookbook" has been replaced with ". samba_recipe" at the start.

    Let me say just two things about style: my functions either fit on one line or they don't. If they do, each phrase needs to be separated by a semi-colon (;); if not, a newline is sufficient. My multi-line functions close with a curly brace on its own line. Also, my comments are "right-justified", with two trailing dashes. Find your style, and stick to it.

    In conclusion, note how we've eliminated temporary files and variables. Nor are there nested decisions or explicit program flow. How was this achieved? Each of these is now an "atomic" action. The one decision in this recipe, "does Marcel have any mail now?", has been encapsulated in the "if_non_zero" function, which is supplied the result of the "num_msg" query. Also, the looping construct has been folded into the "foreach" function. This one function has simplified my recipes greatly. (I've also found it necessary to write a "foreach" variant which passes a single argument to each function executed.)

    The temporary files disappeared into the pipe, which was Unix's (Linux's) single greatest invention. The idea that one program might read its input from the output of another was not widely understood when Unix was invented. And the temporary names disappeared into the shell function arguments. The shell function, which is very well defined in the Korn shell, adds greatly to this simplification.

    To debug in this style, I've found it practical to add two things to a function to tell me what's going on in the oven. For example:

     
       function do_notify	{ comment do_notify $*
    	    you_have_mail $1 | tee do_notify.$$ |  to_samba $2
    	    }
    
    where "comment" looks like:
     
          function comment { echo $* 1>&2; } 
    
    Hopefully, the chefs in the audience will find use for these approaches in their recipes. I'll admit this style is not the easiest to adopt, but soon it will yield recipes of more even consistency, both in taste and temperature. And a programming style that will expand each chef's culinary art.


    Copyright © 1999, Marty McGowan
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Micro Publishing

    By Rick Holbert and Mark Nielsen



    1. Introduction
    2. Overview
    3. Copyright
    4. Software
    5. Hardware
    6. Fabrication
    7. Conclusion and References

    INTRODUCTION
    "Micro Publishing" or publishing "Books on Demand" has been, up until now, only a dream. Most of the pieces were already in place, (Desktop Publishing Software, Laser Printers, Imposition Software, etc.). The last key ingredient was an inexpensive way to bind and cover the books into professional looking "Perfect Bound" books.

    The future just arrived.

    OVERVIEW
    The process I'm about to describe will allow you to use FREE Linux software tools, a laser printer, contact cement, and an easy-to-build book binding vise to produce professional-looking "Perfect Bound" paperback books for the cost of the materials.

    COPYRIGHT
    Please observe all copyright and licensing restrictions. There are plenty of "Open" books in the Linux Documentation Project, and "Public Domain" books at Project Gutenberg.

    SOFTWARE
    The primary tool used to build books with Linux is mpage. I use mpage to set up the pages for printing (four virtual pages per physical sheet). This process is called "imposition."

    Mpage uses the postscript page description language for both input and output. All the other tools are used to translate other formats into postscript, or to translate postscript into other formats.

    Additional tools include:

    TeX and LaTeX
    dvips
    PDFTeX and PDFLaTeX
    GhostScript
    Acrobat Reader

    HARDWARE
    Besides a computer capable of running Linux, you will need a laser printer (single sided, non-duplexing printers work ok), and a book binding vise.

    The book binding vise consists of a thick board measuring 10 inches by 13 inches (I made two vises from a piece of 10 by 30 inch particle board shelf), and three pieces of one inch square metallic tubing (like the kind used to make TV antenna booms). The three pieces of tubing measure 8 inches, 11 inches, and 13 inches. Holes are drilled through the board and tubing to accommodate 1/4 inch carriage bolts. For binding 5 1/2 by 8 1/2 inch books the 8 inch and 11 inch tubes are first arranged to form a T. The 8 inch tube runs vertically along the left, 10 inch side of the board. The 11 inch tube runs horizontally along the middle, 13 inch section of the board. The 11 inch tube may be optionally repositioned at the bottom, 13 inch side of the board for binding 8 1/2 by 11 inch books. The 13 inch tube runs horizontally along the top, 13 inch side of the board, and is attached with 3 inch carriage bolts and wing nuts so it can be adjusted up and down.

    FABRICATION
    Now it's time to literally put all the pieces together. Our first step is to translate our source document into postscript.

    If your source is a TeX, texinfo or LaTeX document, you may use tex/latex or texi2dvi and dvips to convert it into postscript. However, be warned: the default fonts used with dvips are type 3, bitmapped fonts. These look fine once printed, but they are ugly when viewed with GhostScript or Acrobat Reader, and produce large files.

    An example of the commands is as follows (tex is run twice so that cross-references come out right):

     tex filename.tex

     tex filename.tex

     makeindex filename.??

     dvips filename.dvi -o filename.ps

     or in the case of texinfo files (like the GNU docs)

     texi2dvi filename.texi

     dvips filename.dvi -o filename.ps

    A better solution is to use PDFTeX or PDFLaTeX to convert your TeX/LaTeX source document into a PDF, and then to export it to postscript.

    The command is pdftex filename.tex or pdflatex filename.tex

    For texinfo files you should first run texi2dvi (or tex, tex, makeindex) to create any indices or cross-references. You may also try using GhostScript to convert PDFs into postscript using

    pdf2ps filename.pdf filename.ps

    If your source is a PDF you can use GhostScript's pdf2ps command as described in the previous step, or use Acrobat Reader to print to a postscript file.

    If the PDF file is encrypted, you may need to download a GhostScript security patch from Australia. A GhostScript error message will give you the details.

    Now we're ready to use mpage to set up our 5 1/2 by 8 1/2 inch book. The pages are arranged into "signatures" in the order 4, 1, 2, 3. That way they read in the correct order when the page is folded in half.

    mpage produces two files: one for the front page pairs (i.e., 4 - 1, 8 - 5, etc.), and one for the back page pairs (i.e., 2 - 3, 6 - 7, etc.).
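
    Laid out on the physical sheet, the first signature therefore looks like this:

         front of sheet           back of sheet
        +-------+-------+       +-------+-------+
        |   4   |   1   |       |   2   |   3   |
        +-------+-------+       +-------+-------+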

    The mpage commands are:

    mpage -O -b Letter -o filename.ps > filename_front.ps

    mpage -E -b Letter -o filename.ps > filename_back.ps

    You may optionally translate your two files into PDF using

    ps2pdf filename_front.ps filename_front.pdf

    ps2pdf filename_back.ps filename_back.pdf

    I find it easier to print from Acrobat Reader, and it makes distribution to other operating systems a lot easier.
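
    Since the same four commands get typed for every book, you may find it convenient to wrap them in a small shell script (a sketch; the script name and argument convention are my own):

        #!/bin/sh
        # impose.sh -- impose a book for two-sided printing: impose.sh filename
        # (expects filename.ps; writes the front/back files in ps and pdf)
        f=$1
        mpage -O -b Letter -o $f.ps > ${f}_front.ps
        mpage -E -b Letter -o $f.ps > ${f}_back.ps
        ps2pdf ${f}_front.ps ${f}_front.pdf
        ps2pdf ${f}_back.ps ${f}_back.pdf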

    Now print the front pages. I like to break the job down into ten-page chunks. That way, if the printer jams, or the human messes up, I've only lost a maximum of ten pages. Take the pages out, and put them back into your laser printer so that the corresponding back pages will print on the back of the pages you just printed. You will probably have to print them in reverse order (i.e., 10 through 1, 20 through 11, etc.). You may have to experiment a bit to get your pages into the right orientation.

    Once you've printed both sides, fold them in half (a folding machine comes in real handy here), and stack them in the book binding vise. Place your legal-size card stock cover under the folded pages, and align as needed. Clamp down the long tube. Score the cover twice where it will fold along the spine of the book, using a dull utility knife or an old ball-point pen. Apply contact cement along the paper folds and the corresponding area of the cover (between the score marks).

    Let it dry for 10 to 15 minutes, and roll the cover over the folded pages. Run your fingers along the spine of the book to ensure a strong bond. You may also use a rounded object like the side of a pen for this task.

    Loosen the clamp, carefully take the book out, fold the rest of the cover over, and place it back into the clamp (with any excess cover allowed to overlap the bottom aligning tube). Go over the spine a few more times with your finger or the side of a pen.

    Remove the book from the vise, place it horizontally on a flat surface with a weight on top of it to keep the pages flat, and let it sit overnight.

    You may now trim the cover, add title stickers, or laminate as desired. Congratulations! You've just made a book.

    CONCLUSION
    I hope you've enjoyed this short discussion of "Micro Publishing."  It may take a little practice, but after three or four times, your books should look fine.  Remember to follow all safety precautions when building the vise, and to use the contact cement in a well ventilated area.

    Some references are as follows:

    bookvise.pdf - book binding vise plans

    www.mesa.nl - mpage author's home page

    www.tinaja.com - Several articles about "Books on Demand", postscript, acrobat, etc.

    www.gigabooks.net - Sells ready-made book binding vises along with a book describing the process.

    www.cappella.demon.co.uk - Discusses postscript markup language and additional binding processes.

    e-mail me - With your constructive comments, questions, or whatever.



    Rick works as a computer guy at TeamAmerica and Mark works as a computer guy at The Computer Underground. For some reason, these two dudes have started a company called ZING (ZING Is Not GNU, well what is GNU? GNU is Not Unix) to promote and distribute free and open software and literature. Mark doesn't know why he attached his name to this article since Rick did 95% of the work, but it looks good for his resume.


    Copyright © 1999, by Rick Holbert and Mark Nielsen
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Emacs Macros and the Power-Macros Package

    By Jesper Kjær Pedersen


    Abstract

    People sometimes tend to forget that computers are a tool that can make their life much easier. One of the things computers are especially good at, and which is easy for you to teach, is monotonous repetitive work. It gets even better: this also seems to be the kind of work humans are worst at doing; that is, monotonous repetitive work tends to be very error-prone.

    Emacs can eliminate the repetitive work with a very useful concept called macros. Macros are basically keystrokes that Emacs types for you.

    This article will teach you about Emacs macros and show you a number of useful examples. Furthermore, it will teach you about an Emacs package I have written called power-macros, which makes it very easy to bind macros to keys and to save them to a file for use in later Emacs sessions.

    Defining an Emacs macro.

    Defining an Emacs macro is done by pressing C-x ( (that is, press Ctrl, hold it down and press x, then release Ctrl and x and press the opening parenthesis). The subsequent keystrokes will be part of your macro; that is, whenever you ask Emacs to execute your macro, these keystrokes will be typed for you. When you are done defining the macro, press C-x ).

    When a macro has been defined you may ask Emacs to imitate your keystrokes as often as you want simply by pressing C-x e.

    Two-cent tip

    If you need to repeat macros several times, then it might be quite annoying that you need to press two keys to execute the macro defined. (That is, if you need to execute the macro three times, then you must press C-x e, C-x e, C-x e.) A solution to this may be to bind "execute last defined keyboard macro" to a single key press. For example, you may bind it to shift-F1 by inserting the following code into your .emacs file:
    (global-set-key [(shift f1)] 'call-last-kbd-macro)
    

    Example: Making the current word bold

    That's it: now you have learned the basics of Emacs macros, but I'm pretty sure you haven't yet had the feeling that this would change your world much, right? To be honest, I've used Emacs for more than seven years, but until less than a year ago, I didn't see the light either... Therefore, here comes a small example to whet your appetite. More will follow later in the article.

    Imagine that you often want to make the current word bold (in HTML documents). You could simply do that by inserting <b> and </b> around the word. That's no big job, but if you are copy-editing a book, where you need to make words bold hundreds of times each hour, then a macro that can do this may really save you a lot of time.

    The macro is easily recorded: Go to the beginning of the word, insert <b>, go to the end of the word, insert </b>, and there you are!

    Ohhh, not so fast! There is one very important point to notice about this: you are not allowed to go to the beginning or the end of the word by pressing the arrow key a number of times! Why not? Well, if you do, then the macro will fail to find the border of the word whenever your word is of a different length than the word used when defining the macro. You must instead use the commands forward-word and backward-word. These commands are bound to control and the arrow keys. Thus, to go to the beginning of a word, simply press control and the left arrow key.
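
    Spelled out as keystrokes (in the notation used for the larger example later in this article: quoted text is typed literally, the rest is commentary; M-b and M-f are the standard bindings for backward-word and forward-word, and control plus the arrow keys works just as well), the recording session looks like this:

    C-x (      Start defining the macro
    M-b        Go to the beginning of the word (backward-word)
    "<b>"      Type the opening tag
    M-f        Go to the end of the word (forward-word)
    "</b>"     Type the closing tag
    C-x )      End the macro definition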

    Basically there are two kinds of macros: those that are used now and again, and those that are used a number of times in a row and then never again. The above is an example of a macro of the first kind. A description of the second kind is beyond the scope of this article, but an example could be a macro that adds /* REMOVE: to the beginning of a line and */ to the end of it. You might use such a macro a number of times in a row to comment out a whole function in C for later removal.

    Making macros more general

    In some C++ programs you will often find constructs which resemble the following:
    for (bool cont=iterator.First(value); cont; cont=iterator.Next(value)) {
      ...
    }
    
    The only difference from occasion to occasion is the names cont, iterator, value, and of course the content in between the curly brackets.

    If you insert the code above often, then you may wish to build a macro that will help you with this. Your first attempt may be to define a macro that simply inserts:

    for (bool =.First(); ; =.Next()) {
    }
    
    That is, a macro that simply leaves out all the parts that may change from time to time. This is, however, not as useful as it could be, simply because you would need to type cont three times, and iterator and value twice each. What you really need is for Emacs to ask you for the names to use instead of cont, iterator, and value.

    Guess what? You can do that with macros! The trick is called recursive editing. With recursive editing you tell Emacs to stop at a specific place in the macro, you do some editing, and when you are done you tell Emacs to continue with the macro.

    When you record the macro, you may tell Emacs to enter recursive editing by pressing C-u C-x q. Then whenever you execute the macro, Emacs will stop macro execution at that point and let you do some editing, and the macro will not continue until you press C-M-c (that is, control-meta-c; if there is no meta key on your keyboard, it is most likely the Alt key instead).

    While you record the macro, Emacs will also enter recursive editing at that point. That is, the editing you do from the point you press C-u C-x q until you press C-M-c will not be part of the macro.

    OK, we are almost ready to develop a very neat and useful macro, but first let's exercise what we've learned above with a simple example. Type the following:

    C-x ( Type a word ==> C-u C-x q
    
    Now type Hello World, and when done, continue typing the following:
    C-M-c <== C-x )
    
    The above inserted the following text into your buffer: Type a word ==>Hello World<==. Furthermore, it also defined a macro which inserts this text, except for the words Hello World. Whenever you execute the just-defined macro, Emacs will pause after having inserted Type a word ==>, and when you press C-M-c, it will continue with the macro, which means it will insert the text <==.

    Can you see where we are heading? Now we have the tools to ask the user for the three names needed, so all we need now is a way to fetch the information he typed and insert it at the appropriate places.

    Fetching the information could be done in several ways. The simplest way (that is, the one requiring the least knowledge of Emacs) would be to switch to a temporary buffer, let the user type in the information there, and whenever one of the words is needed, simply go to this buffer and fetch it there.

    A much smarter way is to use registers. A register is a container where you may save the text of the current region for later use. To insert text into a register, mark a region and press C-x r s and a letter (the letter indicates which of the registers to save the information to). Later you may insert the content of the register into the buffer by pressing C-x r i and the letter you typed above.

    Now that's it. Below you can see all the keystrokes needed to record this macro. Text in between quotes should be typed literally; the rest of the text on each line is comments, which you should not type.

    It may seem like much to type to obtain this, but on the other hand, when you are done, you will have a very user-friendly interface for inserting the given for-loops.

    "Bool: " C-space This will set the mark - that is one end of the region
    C-u C-x q Type the name of the first bool here
    C-x C-x This will make the region active
    C-x r s a Copy the just typed word to the register named a
    C-a C-k Delete the line, as the just inserted text should not be part of the buffer
    "Iterator: " Now continue as above, and save to register b
    "Value: " Once again continue and this time save to register c
    "for (bool " Now we've started to actually type the for-loop
    C-x r i a Insert the name of the boolean
    "= " C-x r i b Insert the name of the iterator
    C-e ".First(" C-x r i c The name of the value
    C-e "); " C-x r i a C-e "; " C-x r i a C-e " = " C-x r i b C-e ".Next(" C-x r i c C-e ")) {" Return "}"

    Power Macros

    Power Macros is an Emacs package which I developed out of frustration at not being able to define a macro, bind it to a key, and have it bound there for future Emacs sessions. (Or rather, not being able to do so very easily.)

    To use this Emacs package, download the file from its home page. Copy the lisp file to somewhere in your load path, and insert the following into your .emacs file:

    (require 'power-macros)
    (power-macros-mode)
    (pm-load)
    
    If you do not know what a load path is, or do not have one, then create a directory called Emacs in your home directory, copy the file to this directory, and insert the following line into your .emacs file before the lines above:
    (setq load-path (cons "~/Emacs" load-path))
    
    When that is done, you may simply press C-c n when you have defined a macro, and Emacs will ask you the following questions in the mini-buffer:
    Which key should the macro be bound to?
    First, Emacs must know which key the macro should be bound to. When you are done answering these questions, the macro will be available simply by pressing this key, and in that way you may have several macros defined at the same time.

    How should the macro be accessible?
    With power macros you may make the macro accessible in one of two ways: 1) globally, that is, accessible in every buffer; 2) as a major-mode-specific macro, that is, accessible only in buffers with a given major mode.

    As an example of a mode-specific macro, think about the for-loop macro from the example above. This macro is only useful when writing C++ programs. Furthermore, you may need a similar macro for programming Java (which of course uses Java syntax rather than C++ syntax). With power-macros you may bind both the macro for C++ mode and the macro for Java mode to the same key (say C-M-f), and then the correct one will be used in the given mode.

    Which file should it be saved to?
    By default, Emacs saves the macros defined with power-macros to the file named ~/.power-macros. If that is OK for the macro you are defining, then simply press enter at this question. If you do not want to save the given macro to a file for future Emacs sessions, then remove the suggested text (so that you answer the question with an empty string). Finally, you may of course name another file. The section below describes when this can be of special interest.

    What is its description?
    Finally, you have to write a description for the macro just defined. This will make it much easier for you to identify it later, when you have forgotten which key you bound it to, or when you are searching for a key to bind a new macro to.
    As part of binding the macro to a key, Emacs will also check whether the given binding will override an existing binding. If this is the case, it will warn you and ask for confirmation before continuing the definition.

    Local Macros

    Some time ago I was going to give a speech on Emacs. I had done that a number of times before, so I hadn't done any special preparation for this specific speech. While traveling to the speech (by train) I decided to go through my presentation anyway. I was terrified to see that the presentation program suddenly didn't work on my machine.

    So there I was, less than an hour before my speech, and my presentation program didn't work! What should I do?! The answer was kind of obvious: why not make the presentation using Emacs?! Fortunately, the input to the other presentation program was ASCII, and the only construct I used in the presentation was enumerated lists, so it was very easy to rewrite the presentation to look good in an Emacs buffer (with a slightly enlarged font). Now there was only one problem: how could I easily go forward or backward one presentation page?

    Can you guess what the answer was? Yes, you are right: the answer was to create two macros, one going forward one page, and another going backward one page.

    Going forward one page was done the following way:

    1. Search for a line starting with a number of equal signs. This was the second line of each presentation page (just below the title of the page).
    2. Press C-1 C-l (that is, control-one control-el). This positions that line as the second line of the screen, so the title of the page becomes the first.
    3. Go to the beginning of the next line. This was necessary so the subsequent search would not find the current page.
    The two macros just defined are only useful for the given file (and later for all files containing a presentation made for viewing with Emacs). It would therefore be a bit annoying to have these macros defined and bound to keys all the time, especially given that there might be several months before my next Emacs presentation.

    The two macros should therefore be saved to a separate file, and whenever needed I could simply load them. Loading a power macro is done with the function pm-load. Thus I could load the macros by pressing M-x, typing pm-load, pressing enter, and typing the name of the file to load. Loading the macros for the presentation can be made even more automatic by inserting the following lines as the very last lines of the file:

    Local Variables:
    eval: (pm-load "presentation.macro")
    End:
    
    In the above it is assumed that the name of the file containing the macros is called presentation.macro.

    Now Emacs automatically loads the presentation macros whenever the file is opened.

    Managing Power Macros

    When you have defined a number of macros, you might want to perform various management functions on them. This is done by pressing C-c m, which will bring up a buffer like the one you can see below:


    What you see in this buffer is your power macros, each separated by a line of dashes. Many of the keys have special meanings in this buffer (just as they do in the buffer-list buffer or in dired buffers).

    Pressing the Enter key on top of one of the fields lets you edit that field. Editing a field in fact means either changing its content or copying the macro to a new one with the given field changed. You specify which of these you intend after you have pressed Enter on the field.

    Deletion of macros is done in two steps: first you mark the macros you want to delete, and then you tell Emacs to actually delete them. If you know either the buffer-list buffer or dired-mode, you will be familiar with this two-step process.

    The End

    If your appetite has been whetted to learn more about Emacs, then I can inform you that I'm the author of a book on Emacs called "Sams Teach Yourself Emacs in 24 Hours" (ISBN: 0-672-31594-7). To learn more about this book, please visit its home page at http://www.imada.sdu.dk/~blackie/emacs/. This is also the page to visit if you want to download the power-macro package.

    Jesper Kjær Pedersen <blackie@ifad.dk>


    Copyright © 1999, Jesper Kjær Pedersen
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Backup for the Home Network

    By JC Pollman and Bill Mote


    Everyone has a backup plan. Unfortunately, most of us use the "No Backup" plan.

    Disclaimer: This article provides information we have gleaned from reading the books, the HOWTOs, man pages and Usenet news groups, and from countless hours banging on the keyboard. It is not meant to be an all-inclusive, exhaustive study of the topic, but rather a stepping stone from the novice to the intermediate user. All the examples are taken directly from our home networks, so we know they work.

    How to use this guide:

    Prerequisites: If you have Linux installed, you will have everything you need.

    Backup Plan: For the home network, you have to have some sort of backup plan. Although hard drives do crash, the real value of backups is in restoring accidentally deleted or changed files. Sooner or later you will delete or change something important, and without a backup, you could render your computer unbootable. I am embarrassed to admit this, but I actually deleted /root on one occasion. Note: backups should be considered compromised if you have been cracked. Backup plans need to be simple to implement or they will not get done - especially at home. A backup plan for home should cover two areas: how much you are going to back up, and how you are going to do it with the least amount of effort.

    How much to backup: I try to minimize the amount I back up because storage space costs money. I only back up directories, not the entire file system. Most of /usr and /opt are on the install CD-ROM, so if the hard drive crashes, they will be installed by default with a new install. /etc and /home are the most important, as they contain the configuration and custom settings files. Your backup plan should include full backups of the selected directories every so often, and then daily backups of just the changes (incremental backups).

    How to backup: tape drives are usually too expensive for the home network, and floppies are impractical. (Note: I gave up on floppies when the disk count went over 132!) We believe the best compromise is using a spare hard drive. Notice we said hard drive and not partition! Every time I have had problems with hard drives, the entire drive died or became corrupted, not just a partition. Hard drives are so cheap that using one solely for backups is the most cost-efficient method. It is not the most secure way to save your files, as a cracker can get to them, but there are limits to how far we are willing to go to make home backups.

    Backup Programs: There are three common programs used for backups that come with almost all un*x distributions: tar, cpio, and dump.  Each has its strengths and weaknesses.

    TAR: Tar is the most commonly used backup program for small networks. It has been around quite a while and will likely remain for quite some time.  Most people do not know, however, that although tar was designed to put files on tapes, it was not designed for backups. Instead, its purpose is to put the files on the tape so they can be installed on other computers. As such, its incremental backup function is weak.

    CPIO: cpio is similar to tar in that it also lacks a real incremental backup function. In fact, it does not even have a "file list" function: you have to feed it the names of the files you want to archive by piping them from the find program. cpio has two advantages over tar: it creates a smaller uncompressed archive, and it does not die if part of the archive is corrupted.
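
    A typical invocation, then, pipes find into cpio (the directory and archive names here are just for illustration):

    find /etc -print | cpio -o > /backups/etc.cpio    # create the archive
    cpio -id < /backups/etc.cpio                      # extract it, creating directories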

    DUMP: dump is completely different from tar or cpio. It backs up the entire file system - not the files. dump does not care what file system is on the hard drive, or even whether there are files in the file system. It dumps one file system at a time, quickly and efficiently, and it supports nine levels of incremental backups. Unfortunately, it cannot dump individual directories, and so it eats up a great deal more storage space than tar or cpio.
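
    For completeness, a level-0 dump of a filesystem to a file, and the matching restore, look roughly like this (device and file names are placeholders):

    dump -0u -f /backups/hda5.dump /dev/hda5    # full (level 0) dump, record it in /etc/dumpdates
    restore -rf /backups/hda5.dump              # rebuild the dump, run from inside the target fs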

    Our Backup Solution: Click here to see our backup script - named run-backup. Save it to your hard drive and then make it executable by typing:

    chmod 777 run-backup [Enter]
    What part of the script you need to modify: This script is designed to run on any computer by changing only four variables: COMPUTER, DIRECTORIES, BACKUPDIR, and TIMEDIR. Currently we are running it on two Linux boxes and two Solaris boxes. The BACKUPDIR is NFS-mounted on our machines, but it could be another hard drive on the computer. We suggest you set this script up and run it for a month before making major changes.

    What the script does: when the script is run, it first looks to see if today is the first day of the month. If it is, it makes a full backup of the files listed in the variable DIRECTORIES, names the tar ball after the computer and date, e.g. myserver-01Nov.tgz and puts it in the BACKUPDIR directory. Since this is a unique file name, it will stay in the BACKUPDIR until you delete it.  Next, if today is not the first of the month, but it is Sunday, the script will make a full backup of the DIRECTORIES, and overwrite the Sunday file in BACKUPDIR.  In other words, there is only one Sunday file in the backupdir and it is overwritten every Sunday. That way we do not waste much space on the hard drive but still have a full backup that is at most one week old. The script also puts Sunday's date in the TIMEDIR directory. If today is not the first or a Sunday, the script will make an incremental backup of all the files that have changed since Sunday's full backup. As such, each day's backup after Sunday should get larger than the last.  This is the trade-off: you could do an incremental backup of just the files that changed in the last 24 hours and keep each day's backup quite small, but if your hard drive goes south on Friday, you will have to restore Sunday's, Monday's, Tuesday's, Wednesday's and Thursday's backups.  By doing an incremental backup from Sunday each day, the backups are larger, but you only have to restore Sunday's and Thursday's backup. Here is an abbreviated look at the backup directory:

    root   828717 Oct  1 16:19 myserver-01Oct.tgz
    root    14834 Oct 22 01:45 myserver-Fri.tgz
    root     5568 Oct 18 01:45 myserver-Mon.tgz
    root    14999 Oct 23 01:44 myserver-Sat.tgz
    root  1552152 Oct 24 01:45 myserver-Sun.tgz
    root     5569 Oct 21 01:45 myserver-Thu.tgz
    root     5570 Oct 19 01:45 myserver-Tue.tgz
    root     5569 Oct 20 01:45 myserver-Wed.tgz
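
    The logic just described boils down to something like the following sketch. The linked run-backup script is the authoritative version; the variable values, paths and exact tar options below are assumptions, not the authors' exact code.

    #!/bin/sh
    # Sketch of the run-backup logic: full backup on the 1st of the month,
    # full backup every Sunday, incremental since Sunday on other days.
    COMPUTER=myserver                  # name used in the archive file names
    DIRECTORIES="/etc /home"           # what to back up
    BACKUPDIR=/backups                 # where to put the archives
    TIMEDIR=/backups/last-full         # where Sunday's date is stored

    DOM=`date +%d`                     # date of month, e.g. 01
    DOW=`date +%a`                     # day of week, e.g. Sun
    DM=`date +%d%b`                    # e.g. 01Nov

    if [ $DOM = "01" ]; then           # monthly full backup, unique name
      tar -czf $BACKUPDIR/$COMPUTER-$DM.tgz $DIRECTORIES
    elif [ $DOW = "Sun" ]; then        # weekly full backup, overwritten
      date +%d-%b > $TIMEDIR/$COMPUTER-full-date
      tar -czf $BACKUPDIR/$COMPUTER-Sun.tgz $DIRECTORIES
    else                               # incremental since Sunday's full backup
      NEWER=`cat $TIMEDIR/$COMPUTER-full-date`
      tar -N "$NEWER" -czf $BACKUPDIR/$COMPUTER-$DOW.tgz $DIRECTORIES
    fi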


    How to run the script: We run this script as a cron job at one o'clock in the morning every day. If you need help with cron, click here. Note: the incremental backups need the time of the Sunday backup, so if you start in the middle of the week, you need to create the time file in the TIMEDIR yourself. Using the script above as an example, the file's name is myserver-full-date, and it consists of a single line:

    26-Sep
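
    For reference, the crontab entry we use looks something like this (the path to run-backup is an assumption - use wherever you saved the script):

    0 1 * * * /root/run-backup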

    Restoring: Restoring is relatively easy, with only one thing to remember: tar does not store the leading / on file names. So, if you wanted to restore /etc/passwd, you would first have to cd to /, and then type:

    tar -zxvf {wherever_file_is}/myserver-Sun.tgz  etc/passwd
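
    If you are not sure of the exact path stored in the archive, you can list the archive's contents first and grep for the file; for example:

    tar -ztvf {wherever_file_is}/myserver-Sun.tgz | grep passwd
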
    Next month we will be discussing DHCP.


    Copyright © 1999, JC Pollman and Bill Mote
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Running UNIX At Home

    By Rob Reid


    I started using UNIX years ago at school, so when Linux came along I eagerly installed it on my home computer so that I could have the same wonderful operating system in both places. Linux has worked amazingly well for me, but after a while I noticed that it wasn't completely adapted for home use. "locate"'s database wasn't getting updated, the log files kept growing and growing, and the startups and shutdowns were taking a fair chunk out of my day. This was because UNIX computers traditionally stay on all the time, while home computers tend to be frequently turned off.

    None of my cron jobs, like updating locate's database and trimming the log files, were being done, since the computer was hardly ever on in the wee hours of the morning, the time chosen by the distributions (Slackware, then Red Hat 3.0.3, then 5.1) for housecleaning. Very early in the morning is perfect for computers that stay on all the time, since that's when there are the fewest users to be upset by the somewhat disruptive janitorial jobs, but I was unwilling to leave my computer on all the time just to make cron happy. I ruled out changing the job running time to something during the day, since I tend to run my home computer at unpredictable times for a few hours. The only way I could be sure the jobs would be done would be to run them hourly instead of daily or weekly, and that would soon get annoying. My solution, the following script, was to combine an hourly cron job with batch, and to check whether the job had already been done recently enough. The hourly cron job is frequent enough that it will probably get a chance while I have the computer on, but batch minimizes my annoyance by only running the jobs when the computer isn't too busy, like when I've gone for a snack. The timestamp check cancels the job if it's already been done in the last week/month/etc.

    groundskeeper (Bash script)
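
    The script linked above is the real thing; the core idea, though, fits in a few lines. Here is a minimal sketch of the approach (the stamp directory and the job list are assumptions for illustration):

    #!/bin/bash
    # groundskeeper-style housekeeping: run hourly from cron, hand the real
    # work to batch, and skip any job whose stamp file is recent enough.
    STAMPS=/var/lib/groundskeeper        # one timestamp file per job
    mkdir -p $STAMPS

    run_if_stale () {                    # $1=job name, $2=max age in days, rest=command
        local name=$1 days=$2; shift 2
        # find prints the stamp only if it was touched less than $days days ago
        if [ -z "`find $STAMPS/$name -mtime -$days 2>/dev/null`" ]; then
            echo "$* && touch $STAMPS/$name" | batch
        fi
    }

    run_if_stale updatedb 7 /usr/bin/updatedb
    run_if_stale trimlogs 7 /usr/sbin/logrotate /etc/logrotate.conf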

    As you probably know, speeding up the startups and shutdowns is a matter of not starting daemons you'll never need. I've taken it a bit further by often not starting services that I often *do* need. Craziness? No. We all use SysV runlevels now, right? (When I started using Linux, with Slackware, this wasn't the case, but I hope that even the most ardent BSDers have seen the desirability of runlevels.) I was using runlevel 3 as my normal operating mode, had a never-used runlevel 4, and noticed that about half the time booting runlevel 3 was spent on network things. About half the time when I turn on my computer, I'm not going to use my modem at all, so I set up runlevel 4 as "3 without network stuff". Now when I want to use my modem I boot normally, but if I know I won't be using it, I type "linux 4" at the LILO prompt and save a lot of time. No reconfiguration of LILO was necessary. I haven't needed to yet, but I could use my modem in 4 by becoming root and running the network starter scripts by hand, and stopping them when I'm done. One of these days I should write a script to automate that, but I'm lazy. Red Hat provides a runlevel editor in their control-panel, but it is also easy to do from the command line by playing around in the /etc/rc.d/* directories, as sketched below.
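
    On a Red Hat-style system, cloning runlevel 3 into runlevel 4 and disabling the network pieces comes down to shuffling symlinks, roughly like this (the exact script names and numbers vary between distributions and setups; compare the listing further down):

    cd /etc/rc.d/rc4.d
    cp -d ../rc3.d/* .                 # start from a copy of runlevel 3's links
    mv S10network K97network           # turn network start links into kill links
    mv S50inet K50inet
    mv S80sendmail K30sendmail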

    If you're not sure which services you can safely eliminate, here's a listing of my /etc/rc.d/rc[34].d directories as a sample. Your requirements will probably be different, however.

    rc3.d:              rc4.d:
    K08autofs           K08autofs
    K09keytable         K09keytable
    K10named
    K15gpm              K15gpm
    K15sound            K15sound
    K30sendmail
    K45sshd
    K50inet
    K55routed
    K59crond            K59crond
    K60atd              K60atd
    K60lpd              K60lpd
    K65portmap          K65portmap
    K80random           K80random
    K97network
    K99syslog           K99syslog
    S01kerneld          S01kerneld
    S10network
    S20random           S20random
    S30syslog           S30syslog
    S40atd              S40atd
    S40crond            S40crond
    S40portmap          S40portmap
    S50inet
    S55named
    S55sshd
    S60lpd              S60lpd
    S72autofs           S72autofs
    S75keytable         S75keytable
    S80sendmail
    S85gpm              S85gpm
    S85sound            S85sound
    S99local            S99local

    Another, very optional, thing you can do is run tune2fs on your ext2 filesystems to increase the number of mounts allowed between forced fscks. Read the man page first; I have no idea what the ideal number is.
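
    For example (the device name and the count here are placeholders, not recommendations):

    tune2fs -l /dev/hda1 | grep -i mount    # see the current mount counts
    tune2fs -c 50 /dev/hda1                 # force a check only every 50 mounts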


    Copyright © 1999, Rob Reid
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Developing Web Applications at Home - Part 1

    By Anderson Silva


    One of my favorite things about Linux is that it allows me to have a full-featured server at home for a very small price. I have a three-computer network at home, and my router is a simple Intel Pentium 133 with 32MB RAM and a 1.7GB hard drive.

    That machine, which runs Red Hat 6.0, is my router, DNS server, firewall/proxy server, Samba server and web server, and it runs great. The only time I shut that server off is when there is a thunderstorm warning in my city; other than that, the machine runs flawlessly.

    The reason for this article is to help you run your own web applications on your computer, even if your machine is a small Pentium 133 like mine. I normally write articles aimed at newbies, simply because I think they need much more support than the "older" guys do, and this article is no different.

    I would like to introduce you to a scripting language called PHP. For those of you who prefer another language such as Perl, ASP or Cold Fusion, all I can say is "don't get mad at me just because I did not choose your favorite language".

    PHP is a server-side scripting language that can be embedded in HTML, and according to its documentation it was created "sometime in the fall of 1994". If you decide to play around with PHP, you will notice that its syntax is very similar to C, so if you have any programming experience with C, C++ or even Java, programming in PHP should be a breeze.

    The greatest thing about PHP is that it allows you to make web sites that interface with several types of databases, MySQL and PostgreSQL among them.

    This article will show you how to install PHP version 3 (PHP3) on a Red Hat system that is using MySQL as its database. Note: Red Hat's full install sets up PHP3 already configured to work with the PostgreSQL database.


    1. Installing MySQL:

    You can download MySQL from:

            http://www.mysql.com/download_3.22.html

    If you are running Red Hat, I recommend downloading the RPMs for the database. Download:

    1. The Server - MySQL-3.22.27-1.i386.rpm

    2. The Client - MySQL-client-3.22.27-1.i386.rpm

    3. The Development Libraries - MySQL-devel-3.22.27-1.i386.rpm

    Note: MySQL 3.22.27 is the most recent stable version as of this writing.

    Once you have downloaded all three files, run the following command as root:

    rpm -ihv MySQL-*

    This should install all of the MySQL packages you have downloaded.
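
    To check that the server is actually up, something like this should do (the init-script name is an assumption - use whatever the RPM installed):

    /etc/rc.d/init.d/mysql start    # if the RPM did not already start it
    mysqladmin version              # reports the server version if it is running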


    2. Learning MySQL:


    Learning the basics of MySQL should not be a challenge, for two main reasons:


    1. The online documentation is very well organized and helpful.

      It can be found at: http://www.mysql.com/doc.html

    2. Graphical user interfaces are available on the web to make MySQL administration much easier.

      You can find a whole list of GUI Clients for MySQL at: http://www.mysql.com/Contrib/
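
    Just to get a feel for the command-line client, here is a tiny session that creates a database and a table (all the names are made up for illustration):

    mysqladmin -u root create webtest
    mysql -u root webtest
    mysql> CREATE TABLE guests (name VARCHAR(40), email VARCHAR(60));
    mysql> INSERT INTO guests VALUES ('Tux', 'tux@example.com');
    mysql> SELECT * FROM guests;
    mysql> QUIT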


    3. Installing PHP3:


    As I said at the beginning of this article, Red Hat already comes with an RPM for the installation of PHP3, but by default it is set up to support PostgreSQL. Making this RPM work with MySQL is not hard at all, thanks to the great FAQ which can be found at the official PHP web site (http://www.php.net).


    To solve this problem, I quote the FAQ section from the PHP web site:

    3.3 I installed PHP using RPMS, but it doesn't compile with the database support I need! What's going on here?

    Due to the way PHP is currently built, it is not easy to build a complete flexible PHP RPM. This issue will be addressed in PHP4. For PHP, we currently suggest you use the mechanism described in the INSTALL.REDHAT file in the PHP distribution. If you insist on using an RPM version of PHP, read on...

    Currently the RPM packagers are setting up the RPMS to install without database support to simplify installations AND because RPMS use /usr/ instead of the standard /usr/local/ directory for files. You need to tell the RPM spec file which databases to support and the location of the top-level of your database server.

    This example will explain the process of adding support for the popular MySQL database server, using the mod installation for Apache.

    Of course all of this information can be adjusted for any database server that PHP supports. I will assume you installed MySQL and Apache completely with RPMS for this example as well.

    First remove mod_php3

    rpm -e mod_php3

    Then get the source rpm and INSTALL it, NOT --rebuild

    rpm -Uvh mod_php3-3.0.5-2.src.rpm

    Then edit the /usr/src/redhat/SPECS/mod_php3.spec file

    In the %build section add the database support you want, and the path.

    For MySQL you would add --with-mysql=/usr \

    The %build section will look something like this:

    ./configure --prefix=/usr \
    --with-apxs=/usr/sbin/apxs \
    --with-config-file-path=/usr/lib \
    --enable-debug=no \
    --enable-safe-mode \
    --with-exec-dir=/usr/bin \
    --with-mysql=/usr \
    --with-system-regex

    Once this modification is made then build the binary rpm as follows:

    rpm -bb /usr/src/redhat/SPECS/mod_php3.spec

    Then install the rpm

    rpm -ivh /usr/src/redhat/RPMS/i386/mod_php3-3.0.5-2.i386.rpm

    Make sure you restart Apache, and you now have PHP with MySQL support using RPM's. Note that it is probably much easier to just build from the distribution tarball of PHP and follow the instructions in INSTALL.REDHAT found in that distribution.



    Another problem is that some distributions (including Red Hat) that come with PHP3 installed don't have PHP3 activated in Apache's configuration file. To solve this problem, again we count on the PHP3 FAQ:



    I installed PHP using RPMS, but Apache isn't processing the PHP pages! What's going on here? Assuming you installed Apache and PHP completely with RPMS, you need to uncomment or add some or all of the following lines in your httpd.conf file:
    # Extra Modules
    AddModule mod_php.c
    AddModule mod_php3.c
    AddModule mod_perl.c
    
    # Extra Modules
    LoadModule php_module modules/mod_php.so
    LoadModule php3_module modules/libphp3.so
    LoadModule perl_module modules/libperl.so
    

    And add:
    AddType application/x-httpd-php3 .php3
    To the global properties, or to the properties of the VirtualDomain you want to have PHP support added to.

    If you have successfully installed MySQL, re-installed PHP3, and activated PHP3 in your Apache configuration, you should be all set to start using PHP3.

    Note: Once you are done changing the Apache configuration make sure you restart it.
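
    On a stock Red Hat 6.0 box, that restart is typically:

    /etc/rc.d/init.d/httpd restart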


    Quick Test:

    This is a quick test for you to try, to see if PHP3 is running correctly on your system:


    Go to your web root directory (on Red Hat systems: /home/httpd/html) and create the following file, naming it phptest.php3.


    <?
    echo "<HTML>\n";
    echo "<HEAD><TITLE>Hello World!</TITLE></HEAD>\n";
    echo "<BODY>Testing PHP3 with Hello World!</BODY>\n";
    echo "</HTML>\n";
    ?>


    Then open the file in your web browser, and you should see the formatted page. If you do, you are all set to start using PHP3. If you do not get the right results, I suggest checking out PHP's web site at: http://www.php.net


    Next month, I will send in a couple of more complex examples with some data entry into a MySQL database.




    The quoted texts above (shown inside a table in the original) were extracted from the PHP3 web site.


    PERMISSION NOTICE:

    Javascript/PHP code used with permission of the PHP Development Team.
    Copyright 1998. All rights reserved.
    For more information on the PHP Development Team and the PHP project,
    please see <http://www.php.net>.







    Copyright © 1999, Anderson Silva
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    LSOTM (Linux Site O' The Month): LinuxNewbie.org

    By Slambo


    What's This?

    This article is the first in an ongoing series of site reviews for the Linux community. Each month, I will highlight a Linux-related site and tell you all about it. The intent of these articles is to let you know about sites that you might not have been to before, but they will all have to do with some aspect of Linux. Now, on with the story...

    LinuxNewbie.org (LNO)

    Let's face it, learning a new technology (be it 3 or 30 years old) can be intimidating to a newbie. The average Joe who is new to Linux may not know anything beyond where the on/off switch is. Luckily, for those just starting down the One Linux Way, there are places to learn. From LNO's about page:
    "Linuxnewbie.org is a place where anyone can write their tips and tricks and submit them for publication. They are subject to review or possibly testing, frankly we don't know how this is going to work out, but we think if it does work out, the site will do everyone a great service."

    This site has a lot to offer for the newbie (well, from the page name, you might have guessed this), including "Newbieized Help Files", Forums, Articles, Book Reviews and Book Recommendations, along with news about Linux and the Open Source community.

    This site's specialty is the NHFs. Basically, they are HOWTO files for newbies. Before you get all up in arms about it, they didn't "dumb down" the HOWTO files. Rather, they wrote new articles that describe how to perform specific tasks, like setting up an ISA PnP modem or TrueType font support in X. Most NHFs include a brief introduction and a list of commands that will be needed to perform a certain action (much like the list of tools needed for a woodworking project at the beginning of its instructions), followed by specific steps, almost always walking through the steps with an example.

    The NHFs are split, first into Intel vs. Mac architecture (there aren't any entries for Alpha or other processors yet, but I wouldn't be surprised to see them someday), then into more specific categories like: Network, Modems, X Windows, Security and Sound. Like the bit from the about page says, the NHFs do get reviewed, but not by some elite cadre of gurus tucked away in a basement with only an open account at the local pizza parlor. The NHFs are reviewed by everyone. Anyone is welcome to send a comment on any NHF, and, if the comment contains additional technical information, it will get added to the NHF page. Furthermore, everyone is encouraged to write NHFs for inclusion in the site content.

    Since the site is still young, there isn't as diverse a range of NHFs as one might wish for (whatever project I'm working on is the one that doesn't have any information anywhere). However, the site's forums, using the popular Ultimate Bulletin Board software, fill the gap, covering topics like scripts, games, programming and technical support.

    On the Bookshelf are recommended volumes for any Linux hacker. Naturally, there are some works from O'Reilly, but others, where appropriate, are also included. Additional information on these works is linked from the Bookshelf, and some are covered in more detail in the book reviews of the Articles section.

    The only thing that is really missing from this site is a search engine. There is a large amount of information on this site, but most of it ends up in the forums, due to the nature of contributions. However, this is the kind of site that you will want to explore on your own, just reading and following along the links.

    So take the time to visit and explore this site. The wealth of information available will make it worthwhile to read.


    Copyright © 1999, Slambo
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    T/TCP: TCP for Transactions

    By Mark Stacey


    T/TCP is an experimental extension for the TCP protocol. It was designed to address the need for a transaction-based transport protocol in the TCP/IP stack. TCP and UDP are the current choices available for transaction-based applications. Both of these protocols have their advantages and disadvantages. TCP is reliable but inefficient for transactions whereas UDP is unreliable but highly efficient. T/TCP sits between these two protocols making it an alternative for certain applications.

    Currently, a number of flavours of UNIX support T/TCP. The very first implementation of T/TCP, for SunOS 4.1.3 (a Berkeley-derived kernel), was made available in September 1994. The next implementation, for FreeBSD 2.0, was released in March 1995. For my final-year project at the University of Limerick, I implemented T/TCP for Linux in April 1998. The source code is available at http://www.csn.ul.ie/~heathclf/fyp/.

    In this article, I discuss the operation, advantages and flaws of T/TCP. This will allow application developers to decide when T/TCP is appropriate for networking applications. I present my results of a comparative analysis between T/TCP and TCP based on the number of packets per session for each transaction, as well as my conclusions on a case study into the possible impact of T/TCP on the World Wide Web.

    1 Introduction

    The TCP/IP reference model is a specification for a networking stack on a computer. It exists to provide a common ground for network developers. This allows easier interconnection of the different vendor supplied networks, reducing the cost of installing completely new networks in order for one to work with another.

    The most popular implementation of the transport layer in the reference model is the Transmission Control Protocol (TCP). This is a connection-oriented protocol. Another popular implementation is the User Datagram Protocol (UDP), which is a connectionless protocol.

    Both of these protocols have advantages and disadvantages. The two main aspects of the protocols make them useful in different areas. UDP is a connectionless protocol. UDP always assumes that the destination host received the data correctly. The application layer above it looks after error detection and recovery. Even though UDP is unreliable, it is quite fast and useful for applications, such as DNS (Domain Name System) where speed is preferred over reliability. TCP, on the other hand, is a reliable, connection-oriented protocol. It looks after error detection and recovery. Data is retransmitted automatically if a problem is detected. As a result of being more reliable, TCP is a slower protocol than UDP.

    In recent years, with the explosion of the Internet, a need for a new specification arose. The current transport protocols were either too verbose or not reliable enough. A protocol was needed that was faster than TCP but more reliable than UDP. These two protocols lie at either end of the scale trading off speed and reliability: TCP has reliability at the cost of speed, whereas UDP has speed at the cost of reliability. A standard was needed that would allow the reliable transmission of data at a faster rate than the current TCP standard, reducing bandwidth and increasing the transmission speed of data.

    TCP for Transactions (T/TCP) is envisioned as the successor to both TCP and UDP in certain applications. T/TCP is a transaction-oriented protocol based on a minimum transfer of segments, so it does not have the speed problems associated with TCP. By building on TCP, it does not have the unreliability problems associated with UDP. With this in mind, RFC1379 was published in November 1992. It discussed the concepts involved in extending the TCP protocol to allow for a transaction-oriented service, as opposed to a connection-oriented service for TCP and a connectionless service for UDP. Some of the main points that the RFC discussed were the bypassing of the 3-way handshake and the shortening of the TIME-WAIT state from 240 seconds to 12 seconds. T/TCP cuts out much of the unnecessary handshaking and error detection of the current TCP protocol and as a result increases the speed of connection and reduces the necessary bandwidth. Eighteen months later, RFC1644 was published, with the specification for Transaction TCP.

    2 Transaction Transmission Control Protocol

    T/TCP can be considered a superset of the TCP protocol, because T/TCP is designed to work seamlessly with current TCP machines. If a TCP host tries to connect to a T/TCP host, the T/TCP host will respond with the original TCP 3-way handshake. What follows is a brief description of T/TCP and how it differs from the current TCP standard in operation.

    2.1 What is a Transaction?

    The term transaction refers to the request sent by a client to a server, along with the server's reply. RFC955 lists some of the common characteristics of transaction-processing applications.

    2.2 Background to T/TCP

    The growth of the Internet has put a strain on the bandwidth and speed of networks. With more users than ever, a more efficient form of data transfer is needed.

    The absolute minimum number of packets required for a transaction is two: one request followed by one response. UDP is the one protocol in the TCP/IP protocol stack that allows this, the problem here being the unreliability of the transmission.

    T/TCP solves these problems to a large degree. It has the reliability of TCP and comes very close to realizing the two-packet exchange (three in fact). T/TCP uses the TCP state model for its timing and retransmission of data, but introduces a new concept to allow the reduction in packets.

    Even though three packets are sent using T/TCP, the data is carried on the first two; thus, the applications can see the data with the same speed as UDP. The third packet is the acknowledgment to the server by the client that it has received the data, which incorporates the TCP reliability.

    2.3 Basic Operation


    Figure 1. Time Line of a T/TCP Client-Server Transaction

    Consider a DNS transaction, where a client sends a request to a server and expects a small amount of data in return. A diagram of the transaction can be seen in Figure 1. This diagram is very similar to a UDP request. Comparing it with the TCP 3-way handshake in Figure 2, it can be seen that the whole T/TCP transaction requires no more packets than the 3-way handshake alone. Whereas with TCP, three packet transmissions are associated with the establishment of a connection alone (nine with the whole transaction), with T/TCP a total of three packet transmissions are associated with the whole process--a saving of 66% in packets transferred compared to TCP. Obviously, in cases where a large amount of data is being transferred, more packets will be transmitted, reducing the percentage saving. Timing experiments have shown a slightly longer time is required for T/TCP than for UDP, but this is a result of the speed of the computer and not the network. As computers get more powerful, the performance of T/TCP will approach that of UDP.


    Figure 2. TCP 3-way handshake

    2.4 TCP Accelerated Open

    The TCP Accelerated Open (TAO) is a mechanism introduced by T/TCP designed to cut down the number of packets needed to establish connection with a host.

    T/TCP introduces a number of new options. These options allow the establishment of a connection with a host using the TAO. T/TCP uses a 32-bit incarnation number called a connection count (CC). This option is carried in the options part of a T/TCP segment (Figure 3). A distinct CC value is assigned to each direction of an open connection. Incremental CC values are assigned to each connection that a host establishes, either actively or passively.


    Figure 3. TCP Header

    The 3-way handshake is bypassed using the CC value. Each server host caches in memory (or in a file) the last valid CC value it received from each different client host. This CC value is sent with the initial SYN segment to the server. If the initial CC value for a particular client host is larger than the corresponding cached value, the property of the CC options (the increasing numbers) ensures the SYN segment is new and can be accepted immediately.

    The TAO test fails if the CC option that arrives in the SYN segment is smaller than the last CC value received and cached by the host or if a CCnew option is sent. The server then initiates a 3-way handshake in the normal TCP/IP fashion. Thus, the TAO test is an enhancement to TCP, with the normal 3-way handshake to fall back on for reliability and backward compatibility.

    2.5 Truncation of TIME-WAIT

    The TIME-WAIT state is a state entered by all TCP connections when the connection has been closed. The length of time for this state is 240 seconds to allow any duplicate segments still in the network from the previous connection to expire. The introduction of the CC option in T/TCP allows for the truncation of the TIME-WAIT state. The CC option provides protection against old duplicates being delivered to the wrong incarnation of a given connection.

    Time constraints are placed on this truncation, however. Because the CC value is monotonically increasing, it may wrap around, so a CC value could be encountered that is the same as one in duplicate segments from a previous incarnation. As a rule, the truncation can only be performed when the duration of the connection is less than the maximum segment lifetime (MSL). The recommended value for the MSL is 120 seconds. As with the original TCP, the host that sends the first FIN is required to remain in the TIME-WAIT state for twice the MSL once the connection is completely closed at both ends. This implies a TIME-WAIT state of 240 seconds with the original TCP, even though some implementations of TCP have the TIME-WAIT set to 60 seconds. Stevens shows how the TIME-WAIT state for T/TCP may be shortened to 12 seconds.

    CC options do have problems when used on networks with high-speed connections. This is rarely a problem on older networks, but with FDDI and gigabit Ethernets becoming more common, the CC value will wrap around more often, and it may wrap fast enough for duplicate-segment problems to occur. Where CC options alone are not sufficient, the PAWS (protection against wrapped sequences) option adds another layer of security against this problem.

    2.6 Examples

    T/TCP can be beneficial to some of the applications which currently use TCP or UDP. At the moment, many applications are transaction-based rather than connection-based, but still must rely on TCP along with its overhead. UDP is the other alternative, but not having time-outs and retransmissions built into the protocol means the application programmers must supply the time-outs and reliability checking themselves. Since T/TCP is transaction-based, there is no set-up and shutdown time, so the data can be passed to the process with minimal delay.

    2.6.1 HTTP and RPC

    Hypertext Transfer Protocol is the protocol used by the World Wide Web to access web pages. The number of round trips used by this protocol is more than necessary. T/TCP can be used to reduce the number of packets required.

    HTTP is the classic transaction style application. The client sends a short request to the server requesting a document or an image and then closes connection. The server then sends on the information to the client. T/TCP can be used to improve this process and reduce the number of packets on the network.

    With TCP, the transaction is accomplished by connecting to the server (3-way handshake), requesting the file (GET file), then closing the connection (sending a FIN segment). T/TCP operates by connecting to the server, requesting the document and closing the connection all in one segment (TAO). It is obvious that bandwidth has been saved.

    Remote Procedure Calls also adhere to the transaction style paradigm. A client sends a request to a server for the server to run a function. The results of the function are then returned in the reply to the client. Only a tiny amount of data is transferred with RPCs.

    2.6.2 DNS

    The Domain Name System is used to resolve host names into the IP addresses that are used to locate the host. To resolve a domain name, the client sends a request with the IP address or a host name to the server. The server responds with the host name or IP address where appropriate. This protocol uses UDP as its underlying process.

    As a result of using UDP, the process is fast but not reliable. Furthermore, if the response by the server exceeds 512 bytes of data, it sends the data back to the client with the first 512 bytes and a truncated flag. The client has to resubmit the request using TCP, since there is no guarantee that the receiving host will be able to reassemble an IP datagram exceeding 576 bytes. For safety, many protocols limit the user data to 512 bytes.

    T/TCP is the perfect candidate for the DNS protocol, because of its speed and reliability.

    2.7 Summary

    T/TCP provides a simple mechanism that allows the number of segments involved in a data transmission to be reduced--the TAO. This test allows a client to open a connection, send data and close a connection all in one segment. With TCP, opening a connection, transmission of data and the closing of the connection are all completely separate processes.

    The highest savings result with small data transfers. This leads to the conclusion that T/TCP favors situations with small amounts of data to be transferred. HTTP, RPCs and DNS are protocols that require the exchange of small amounts of data.

    3. Testing and Analysis

    In order to investigate the benefits or drawbacks of this implementation of T/TCP, it is important both to test its operation and to compare it to the original TCP/IP operation. I performed these tests using the Linux 2.0.32 kernel with T/TCP modifications and FreeBSD version 2.2.5, which already implements T/TCP.
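
    On the FreeBSD side, T/TCP behaviour is controlled through a sysctl variable; to the best of my knowledge the knob is the one below, but verify it against your kernel version:

    sysctl -w net.inet.tcp.rfc1644=1    # enable the RFC1644 (T/TCP) extensions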

    3.1 Operation Examples

    This section demonstrates the operation of the protocol under various conditions.

    3.1.1 Client Reboot

    In this scenario, I rebooted the client, so its TAO cache had been reinitialized.

    When the client attempts a connection with a server, it finds that the latest CC value received from the server is undefined. Hence it sends a CCnew option to indicate that a 3-way handshake is needed.

    The sequence of segments below conforms to the protocol implementation.

    elendil.ul.ie.2177 > devilwood.ece.ul.ie.8888: SFP 3066875000:3066875019(19) win 15928 <mss 1460,nop,nop,ccnew 10> (DF)
    
    devilwood.ece.ul.ie.8888 > elendil.ul.ie.2177: S 139872882:139872882(0) ack 3066875001 win 17424 <mss 1460,nop,nop,cc 3, nop,nop,ccecho 10> (DF)
    
    elendil.ul.ie.2177 > devilwood.ece.ul.ie.8888: F 20:20(0) ack 1 win 15928 <nop,nop,cc 10> (DF)
    
    devilwood.ece.ul.ie.8888 > elendil.ul.ie.2177: . ack 21 win 17405 <nop,nop,cc 3> (DF)
    
    devilwood.ece.ul.ie.8888 > elendil.ul.ie.2177: FP 1:31(30) ack 21 win 17424 <nop,nop,cc 3> (DF)
    
    elendil.ul.ie.2177 > devilwood.ece.ul.ie.8888: . ack 32 win 15928 <nop,nop,cc 10> (DF)

    3.1.2 Normal T/TCP Transaction
    
    Once the client has completed its first transaction with the server, the CC value in the TAO cache will contain a number. This allows the client to send a normal CC option, indicating to the server that it may bypass the 3-way handshake if possible.

    The client and the server hold state information about the other host, so the TAO test succeeds and the minimal 3-segment exchange is possible.

    elendil.ul.ie.2178 > devilwood.ece.ul.ie.8888: SFP 2021229800:2021229819(19) win 15928 <mss 1460,nop,nop,cc 11> (DF)
    
    devilwood.ece.ul.ie.8888 > elendil.ul.ie.2178: SFP 164103774:164103804(30) ack 2021229821 win 17424 <mss 1460,nop,nop,cc 4, nop,nop,ccecho 11>
    (DF)
    
    elendil.ul.ie.2178 > devilwood.ece.ul.ie.8888: . ack 32 win 15928 <nop,nop,cc 11> (DF)
    

    3.1.3 Server Reboot

    If the server is rebooted after the previous two tests, all the state information about the host will be lost.

    When the client request arrives with a normal CC option, the server forces a 3-way handshake, since the CC value received from the client is undefined. The SYNACK segment forces the 3-way handshake because it acknowledges only the client's SYN and not the data.

    elendil.ul.ie.2141 > devilwood.ece.ul.ie.8888: SFP 2623134527:2623134546(19) win 15928 <mss 1460,nop,nop,cc 9> (DF)
    
    arp who-has elendil.ul.ie tell devilwood.ece.ul.ie
    
    arp reply elendil.ul.ie is-at 0:20:af:e1:41:4e
    
    devilwood.ece.ul.ie.8888 > elendil.ul.ie.2141: S 25337815:25337815(0) ack 2623134528 win 17424 <mss 1460,nop,nop,cc 2, nop,nop,ccecho 9> (DF)
    
    elendil.ul.ie.2141 > devilwood.ece.ul.ie.8888: F 20:20(0) ack 1 win 15928 <nop,nop,cc 9> (DF)
    
    devilwood.ece.ul.ie.8888 > elendil.ul.ie.2141: . ack 21 win 17405 <nop,nop,cc 2> (DF)
    
    devilwood.ece.ul.ie.8888 > elendil.ul.ie.2141: FP 1:31(30) ack 21 win 17424 <nop,nop,cc 2> (DF)
    
    elendil.ul.ie.2141 > devilwood.ece.ul.ie.8888: . ack 32 win 15928 <nop,nop,cc 9> (DF)
    

    3.1.4 Request or Reply Exceeds MSS

    If the initial request exceeds the maximum segment size allowed, the request will have to be fragmented.

    When the server receives the initial SYN with just the data and no FIN, depending on the time outs, it either responds with a SYNACK immediately or waits for the FIN bit to arrive before responding with the SYNACK that acknowledges all of the data. The server then proceeds to send the multi-packet response if required.

    localhost.2123 > localhost.8888: S 2184275328:2184278860(3532) win 14128 <mss 3544,nop,nop,cc 5> (DF)
    
    localhost.2123 > localhost.8888: FP 2184278861:2184279329(468) win 14128 <nop,nop,cc 5>: (DF)
    
    localhost.8888 > localhost.2123: S 1279030185:1279030185(0) ack 2184278861 win 14096 <mss 3544,nop,nop,cc 6,nop,nop,ccecho 5>
    
    localhost.2123 > localhost.8888: F 469:469(0) ack 1 win 14128 <nop,nop,cc 5> (DF)
    
    localhost.8888 > localhost.2123: . ack 470 win 13627 <nop,nop,cc 6> (DF)
    
    localhost.8888 > localhost.2123: FP 1:31(30) ack 470 win 13627 <nop,nop,cc 6> (DF)
    
    localhost.2123 > localhost.8888: . ack 32 win 14128 <nop,nop,cc 5> (DF)
    

    3.1.5 Backward Compatibility

    Because T/TCP is a superset of TCP, it must be able to communicate seamlessly with other hosts not running T/TCP.

    There are a couple of different scenarios in this situation. Some implementations hold the data in the SYN until the 3-way handshake has passed. In this situation the client only needs to resend the FIN segment to let the server know that all the data has been sent. The server then responds with normal TCP semantics.

    In other implementations, the SYN segment is dumped once it has been processed, including the data sent in the initial SYN. The server sends a SYNACK acknowledging only the SYN sent. The client times out after a period and resends the data and FIN. The server then proceeds as normal.

    When testing the implementation for backward compatibility, I found an unusual feature (bug?) of Linux. When a SYN is sent with the FIN bit set, the Linux host responds with the SYNACK segment but also with the FIN bit turned on. This causes the client to mistakenly believe the server has sent the reply back to the client.

    This problem was traced to the way Linux constructs its SYNACK segment. It copies the header of the original SYN (and with it all the flags), then sets the flags it needs - but never clears the copied FIN flag. This results in the Linux host sending a FIN without knowing it. I pointed this out to the developers of the Linux kernel. Their reasoning was that T/TCP leaves hosts open to a SYN flood attack and as such should not be allowed into mainstream protocols. As it turned out, only a small check was needed to solve this problem.

    elendil.ul.ie.2127 > skynet.csn.ul.ie.http: SFP 520369398:520369417(19) win 15928 <mss 1460,nop,nop,ccnew 7> (DF)
    
    skynet.csn.ul.ie.http > elendil.ul.ie.2127: SF 2735307581:2735307581(0) ack 520369399 win 32736 <mss 1460>
    
    elendil.ul.ie.2127 > skynet.csn.ul.ie.http:  F 20:20(0) ack 1 win 15928 (DF)
    
    skynet.csn.ul.ie.http > elendil.ul.ie.2127: . ack 1 win 32736 (DF)
    
    elendil.ul.ie.2127 > skynet.csn.ul.ie.http: FP 520369399:520369418(19) win 15928 <mss 1460,nop,nop,ccnew 7> (DF)
    
    skynet.csn.ul.ie.http > elendil.ul.ie.2127: . ack 21 win 32716 (DF)
    
    skynet.csn.ul.ie.http > elendil.ul.ie.2127: P 1:242(241) ack 21 win 32736 (DF)
    
    skynet.csn.ul.ie.http > elendil.ul.ie.2127: F 242:242(0) ack 21 win 32736
    
    elendil.ul.ie.2127 > skynet.csn.ul.ie.http: . ack 243 win 15928 (DF)
    

    3.2 Performance Analysis

    To investigate the performance of T/TCP in comparison to the original TCP/IP, I compiled a number of executables that returned different-sized data to the client. The two hosts involved were elendil.ul.ie (running Linux) and devilwood.ece.ul.ie (running FreeBSD 2.2.5). The tests were performed for 10 different response sizes to vary the number of segments required to return the full response. Each request was sent 50 times and the results averaged. The maximum segment size in each case is 1460 bytes.

    The metric used for performance evaluation was the average number of segments per transaction. I used tcpdump to examine the packets exchanged. Note that tcpdump is not entirely accurate: during fast packet exchanges, it tends to drop some packets to keep up. This accounts for some discrepancies in the results.
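
    An invocation along the following lines (the interface name is an assumption; -t suppresses timestamps) produces traces like those shown in section 3.1:

    tcpdump -t -i eth0 host devilwood.ece.ul.ie and port 8888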

    3.2.1 Number of Packets per Transaction


    Figure 4. Number of Segments versus Size of Data Transfer

    Figure 4 shows the testing results for the number of segments for T/TCP versus the number of segments for normal TCP/IP. It is immediately obvious that there is a saving of five packets on average. These five packets are accounted for by the 3-way handshake and the packets sent to close a connection. Lost packets and retransmissions cause discrepancies in the path of the graph.

    When using a TCP client and a T/TCP server, there is still a saving of one segment. A normal TCP transaction requires nine segments, but because the server was using T/TCP, the FIN segment was piggybacked on the final data segment, reducing the number of segments by one. Thus, a reduction in segments results even if just one side is T/TCP aware.


    Figure 5. Percentage Savings per Size of Data Transfer

    Figure 5 shows the percentage savings for the different packet sizes. The number of packets saved remains fairly constant, but because the number of packets being exchanged increases, the overall savings decreases. This indicates T/TCP is more beneficial to small data exchanges. These test results were obtained from two hosts on the same intranet. For comparison purposes, the tests were repeated for a host on the Internet; www.elite.net was chosen as the host. Requests were sent to the web server for similar sized data. Figure 6 shows these results. This graph is not as smooth as the graph seen in Figure 4 due to a higher percentage of packets being lost and retransmitted.


    Figure 6. Number of Segments versus Size of Data Transfer for Internet Host

    3.3 Memory Issues

    The main memory drain in the implementation is in the routing table. In Linux, for every computer that the host comes into contact with, an entry for the foreign host is made in the routing table. This applies whether the connection is direct or along a multi-hop route. This routing table is accessed through the rtable structure. The implementation of T/TCP adds two new fields to this structure, CCrecv and CCsent.

    The entire size of this structure is 56 bytes. This isn't a major memory hog on a small stand-alone host. On a busy server though, where the host communicates with perhaps thousands of other hosts an hour, it can be a major strain on memory. Linux has a mechanism where a route that is no longer in use can be removed from memory. A check is run periodically to clean out unused routes and those that have been idle for a time.

    The problem here is the routing table holds the TAO cache. Thus, any time a route containing the last CC value from a host is deleted, the local host has to re-initiate the 3-way handshake with a CCnew segment.

    A separate cache could be created to hold the TAO values, but the routing table is the handiest solution. Also, when cleaning out routing entries, a check can be added for a CC value other than zero (undefined); in that case, the route could either be left for a longer time span or kept permanently.

    The benefits of leaving the routing entries up permanently are clear. The most likely use of this option would be a situation where a host only talks to a certain set of foreign hosts and denies access to unknown hosts. In this case, it is advantageous to keep a permanent record in memory so that the 3-way handshake can be bypassed more often.

    3.4 Protocol Analysis

    The original protocol specification (RFC1644) labeled T/TCP as an experimental protocol. Since the RFC was published, no updates have been made to the protocol to fix some of its problems. The benefits over the original TCP protocol are obvious, but is it a case of the disadvantages outweighing the advantages?

    One of the more serious problems with T/TCP is that it opens the host to certain denial-of-service attacks. SYN flooding (see http://www.sun.ch/SunService/technology/bulletin/bulletin963.html for more information) is the term given to a form of denial-of-service attack where the attacker continually sends SYN packets to a host. The host creates a sock structure for each of the SYNs, thus reducing the number of sock structures available to legitimate users. This can eventually crash the host if enough memory is used up. SYN cookies were implemented in the Linux kernel to combat this attack; they involve sending a cookie to the sender to verify that the connection is valid. SYN cookies cause problems with T/TCP, as no TCP options are sent in the cookie and any data arriving in the initial SYN can't be used immediately. The CC option in T/TCP does provide some protection on its own, but it is not secure enough.

    Another serious problem discovered during research is that attackers can bypass rlogin authentication. An attacker creates a packet with a false IP address in it, one that is known to the destination host. When the packet is sent, the CC options allow the packet to be accepted immediately and the data passed on. The destination host then sends a SYNACK to the original IP address. When this SYNACK arrives, the original host sends a reset, as it is not in a SYN-SENT state. This happens too late, however, as the command will already have been executed on the destination host. Any protocol that uses an IP address as authentication is open to this sort of attack. (See http://geek-girl.com/bugtraq/1998_2/0020.html.) There are methods of avoiding this security hole.

    Kerberos is a third-party authentication protocol but requires the presence of a certification authority and an increase in the number of packets transferred. The IP layer has security and authentication built into it. With the new IP version being standardized, IPv6, the authentication of IP packets will be possible without third-party intervention. This is accomplished through the use of an authentication header that provides integrity and authentication without confidentiality.

    RFC1644 also has a duplicate-transaction problem. This can be serious for non-idempotent applications (repeat transactions are very undesirable). Requesting the time from a timeserver can be considered idempotent, because no adverse effect results on either the client or the server if the transaction is repeated. In the case of a banking system, however, if an account transaction were repeated accidentally, the owner would either gain or lose twice as much as anticipated. This error can occur in T/TCP if a request is sent to a server and the server processes the transaction, but the process crashes before it sends back an acknowledgment. The client side times out and retransmits the request; if the server process recovers in time, it can repeat the same transaction. This problem occurs because the data in a SYN can be passed immediately to the process, rather than waiting for the 3-way handshake to complete as in TCP. The use of two-phase commits and transaction logging can keep this problem from occurring.

    3.5 Summary

    This section illustrates the required functionality of T/TCP for Linux. It also displays the advantages in speed and efficiency that T/TCP has over normal TCP.

    T/TCP admittedly has some serious problems, but these problems are not relevant to all situations. Where hosts have some form of protection (other than pure T/TCP semantics) and basic security precautions are taken, T/TCP can be used without any worries.

    4. Case Study: T/TCP Performance over Suggested HTTP Improvements

    With the World Wide Web being the prime example of client-server transaction processing nowadays, this section will focus on the benefits of T/TCP to the performance of the Web.

    Currently, the HTTP protocol sits in the application layer of the TCP/IP reference model. It uses the TCP protocol to carry out all its operations, UDP being too unreliable. There is a lot of latency involved in the transfer of information, the 3-way handshake and the explicit shutdown exchanges being prime examples. Using the criteria specified in section 2.1, it is apparent that the World Wide Web's operation is one of transactions.

    4.1 Web Document Characteristics

    In a survey of 2.6 million web documents searched by the Inktomi web crawler search engine (see http://inktomi.berkeley.edu), it was found that the mean document size on the World Wide Web was 4.4KB, the median size was 2.0KB, and the maximum size encountered was 1.6MB.

    Referring to Figure 5, it can be seen that the smaller the transfer, the better the performance of T/TCP over normal TCP/IP. With a mean document size of 4.4KB, this results in an average saving of just over 55% in the number of packets. Taking the median size into account, there is a saving of approximately 60%.

    Time-wise there will be an improvement in speed, depending of course on the reliability of the network.

    4.2 Suggested Performance Improvements for HTTP

    A number of suggestions have been put forward to improve the operation of HTTP and reduce the time and bandwidth required to download information. Most of these suggestions are based on compression and/or delta encoding.

    4.2.1 Compression

    At the moment, all web pages are transferred in plaintext form, requiring little work from either the server side or the client side to display the pages.

    In order to introduce compression into the HTTP protocol, a number of issues would have to be resolved.

    First and foremost is backward compatibility: with the web having reached so far across the world, switching to compression would take a long time. Browsers would need to be programmed to handle compressed web pages, and web servers would need to be configured to compress the requested information before sending it on to the user. It would be a straightforward task for the IETF (Internet Engineering Task Force) to introduce a compression standard; it would then be up to the vendors and application writers to modify the browsers and servers for the new standard.
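
    In fact, HTTP/1.1 (RFC2068) already defines negotiation machinery such a standard could build on: a browser announces the encodings it accepts, and the server labels the encoding it used. An exchange might look like this (the host name and length here are invented for illustration):

        GET /index.html HTTP/1.1
        Host: www.example.com
        Accept-Encoding: gzip

        HTTP/1.1 200 OK
        Content-Type: text/html
        Content-Encoding: gzip
        Content-Length: 1212

        ...compressed entity body...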

    Another issue is the load placed on the server when it is asked to compress the information. Many busy servers would not have the power to handle the extra workload. This holds to a lesser extent on the client side, where only a minimal overhead is involved in decompressing a few pages at a time.

    In their paper "Network Performance Effects of HTTP/1.1, CSS1 and PNG", the authors investigated the effect of introducing compression into the HTTP protocol. They found that compression resulted in a 64% saving in download time and a 68% decrease in the number of packets required. Over normal TCP/IP, this brings the packet exchanges and the size of the data down to the level where T/TCP becomes beneficial. Thus a strategy involving both compression and T/TCP can result in enormous savings in time and bandwidth.

    4.2.2 Delta Encoding

    In this situation, a delta refers to the difference between two files. On UNIX systems, the diff command can be used to generate the delta between two files (for example, diff old.html new.html > delta). Given one version and the delta, the other version can be regenerated: patch old.html delta recreates the new file, and patch -R reverses the process.

    For delta encoding on the web, the client initially requests a document and the complete document is downloaded. This will result in about a 55% benefit if T/TCP is used, taking into account the mean size of a document. Once the client has the page, it can be cached and stored indefinitely. When the client requests the document the next time, the browser will already have the original document cached. Using delta encoding, the browser would present the web server with the date the cached document was last modified. The server determines whether the document has been updated since the cached copy was stored, and if so, a delta of the server-side document is created. The delta, rather than the original document, is then transferred.

    Of course, there are quite a few difficulties that need to be considered.

    1. The client needs to retain a cached copy of the document. This is not much of a hassle with more modern browsers, as it is already done. In fact, the HTTP protocol already defines a way to ask a server for a document's last-modified date (a conditional GET using the If-Modified-Since header). This date is compared against the cached document and a decision is made whether to download the new file or display the cached one.
    2. On the server side, multiple versions of the document have to be cached to allow the server to create deltas. A decision has to be made about how many changed versions are kept. Should the older versions be kept on the server side, or should a separate database of old versions be maintained? A more detailed study of the impact of caching documents can be found in Braun & Claffy's paper (see Resources).
    3. Where there have been several updates to the server-side document since the client's copy was cached, it should be decided how many updates are allowed before the full new document is sent instead of a delta. The more changes applied to a document, the larger the delta becomes, and hence the smaller the savings from delta encoding.
    4. Again, there is the question of the load placed on the server by generating a delta for each document requested, just as with the compression method.
    Mogul et al. (see Resources) investigated the effect of delta encoding on the web. In their testing, they not only used delta encoding, they also compressed the generated deltas to further reduce the amount of information transferred. They discovered that using the "vdelta" delta generator together with compression, they could achieve up to 83% savings in the transmission of data.

    If this method were used with T/TCP, there could be as much as a further 66% saving in the packets transferred, for a total reduction of about 94%.

    It should be noted, however, that this is a best-case scenario: the document will already have been cached on both the server and the client side, and the client and server will previously have completed the 3-way handshake that allows the TAO tests to succeed.

    4.2.3 Persistent HTTP

    RFC2068 describes a modification to HTTP, P-HTTP, that maintains a continuous connection to an HTTP server across multiple requests. This removes the inefficiency of continually reconnecting to a web server, for example to download multiple images from the same page; the constant connection and reconnection causes a lot of unnecessary overhead.

    Some advantages over the original HTTP protocol are:

    1. Opening and closing fewer TCP connections saves CPU time and memory.
    2. Multiple HTTP requests and responses can be sent on a single connection, without the waiting that opening and closing multiple connections would otherwise impose.
    3. Network congestion is reduced, since there are fewer packets.
    This technique is one step away from T/TCP. Instead of using transactions, it uses persistent connections, much like the TELNET protocol. In this situation T/TCP would not be of much benefit, since the connection remains open for a length of time with multiple requests being exchanged. This violates the transaction characteristics discussed in section 2.1.

    4.3 Summary

    Using the results obtained in section 3 and the characteristics of documents available on the World Wide Web, this section presented a study of how T/TCP can benefit, or otherwise, some of the suggestions for improving the HTTP protocol.

    The main case for the introduction of compression and delta encoding is the reduction in the size of the data that needs to be transferred. The results obtained from the performance analysis of T/TCP suggest that the greatest benefit is obtained on small data transfers. Compression and delta encoding produce data small enough to be sent in one packet, and it is under these conditions that T/TCP operates best.

    P-HTTP puts forward the idea that a connection should be semi-permanent, unlike the open-close operation HTTP currently employs. In this scenario, T/TCP offers little benefit, because of its transaction-oriented style.

    5. Socket Programming Under T/TCP

    Socket programming for T/TCP differs only slightly from socket programming for TCP.

    As an example, the chain of system calls to implement a TCP client would be as follows:

        socket()     /* create the endpoint                   */
        connect()    /* 3-way handshake with the server       */
        send()       /* transmit the request                  */
        shutdown()   /* half-close: no more data to send      */
        recv()       /* read the reply                        */
        close()

    Whereas with T/TCP the chain of commands would be:

        socket()     /* create the endpoint                   */
        sendto()     /* connect, send the request and signal
                        end-of-data, all in one call          */
        recv()       /* read the reply                        */
        close()

    The sendto function has to be called with a new flag, MSG_EOF, to indicate to the kernel that it has no more data to send on this connection. This is the transaction processing coming into effect.

    Programming under T/TCP is much like programming under UDP.
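
    A minimal T/TCP client in C might look like the following sketch; it assumes a T/TCP-capable kernel that defines MSG_EOF, and the server address and port are placeholders:

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            int s;
            struct sockaddr_in srv;
            char buf[4096];
            ssize_t n;
            const char request[] = "hello?\n";

            if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
                perror("socket");
                return 1;
            }

            memset(&srv, 0, sizeof(srv));
            srv.sin_family = AF_INET;
            srv.sin_port   = htons(8888);                /* placeholder */
            srv.sin_addr.s_addr = inet_addr("10.0.0.1"); /* placeholder */

            /* No connect(), no shutdown(): sendto() with MSG_EOF opens
               the connection, sends the request and signals end-of-data
               in a single call. */
            if (sendto(s, request, sizeof(request) - 1, MSG_EOF,
                       (struct sockaddr *) &srv, sizeof(srv)) < 0) {
                perror("sendto");
                return 1;
            }

            /* Read the reply until the server closes its end. */
            while ((n = read(s, buf, sizeof(buf))) > 0)
                write(STDOUT_FILENO, buf, n);

            close(s);
            return 0;
        }

    Compared with the TCP version, the connect() and shutdown() calls have simply disappeared into the single sendto().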

    6. Conclusion

    T/TCP was originally designed to address the need for a more efficient protocol for transaction style applications. The original protocols defined in the TCP/IP reference model were either too verbose or not reliable enough.

    T/TCP works by building on the TCP protocol, introducing a number of new options that allow the 3-way handshake to be bypassed in certain situations. When this occurs, a transaction can approach the minimum number of segments required for a data transfer. Using the TAO test, T/TCP can reduce the average number of segments involved in a transaction from 9 (for TCP) to 3. This has potential benefits for overloaded networks, where there is a need for a more efficient protocol.

    Analysis of T/TCP shows that it benefits small, transaction-oriented transfers more than large-scale information transfers. Aspects of transactions can be seen in such cases as the World Wide Web, Remote Procedure Calls and DNS, and these applications can benefit from T/TCP in both efficiency and speed. On average, T/TCP reduces both the number of segments involved in a transaction and the time taken.

    As T/TCP is still an experimental protocol, there are problems that need to be addressed. Security problems include the vulnerability to SYN flood attacks and the bypassing of rlogin authentication. Operational problems include the possibility of duplicate transactions. A less frequent problem is the wrapping of CC values on high-speed connections, which can open a destination host to accepting segments on the wrong connection.

    Many people recognize the need for a protocol that favors transaction-style processing and are willing to accept T/TCP as the answer. The security considerations lead to the conclusion that T/TCP is most useful in a controlled environment, one where there is little danger from a would-be attacker who could exploit the weaknesses of the standard. Examples of such enclosed environments are company intranets and networks protected by firewalls. With many companies seeing the web as the future of doing business, both internal and external, a system employing T/TCP together with some of the improvements to HTTP, such as compression and delta encoding, would result in a dramatic improvement in speed within a company intranet.

    Where programmers are willing to accept T/TCP as a solution for their applications, only minor modifications are needed to make an application T/TCP aware. On the client side, the connect() and shutdown() function calls are eliminated, replaced by adding the MSG_EOF flag to the sendto() call. Server-side modifications involve simply adding the MSG_EOF flag to the send() function.
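
    For instance, the reply path of a server need change only as in this sketch (again assuming a T/TCP-capable kernel; everything else is ordinary socket code, with a placeholder port):

        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>

        int main(void)
        {
            int ls, c;
            struct sockaddr_in a;
            char req[4096];
            const char reply[] = "hello!\n";

            ls = socket(AF_INET, SOCK_STREAM, 0);
            memset(&a, 0, sizeof(a));
            a.sin_family = AF_INET;
            a.sin_port   = htons(8888);          /* placeholder port */
            a.sin_addr.s_addr = htonl(INADDR_ANY);
            bind(ls, (struct sockaddr *) &a, sizeof(a));
            listen(ls, 5);

            for (;;) {
                if ((c = accept(ls, NULL, NULL)) < 0)
                    continue;
                /* The client's MSG_EOF means end-of-file arrives along
                   with the request data. */
                while (read(c, req, sizeof(req)) > 0)
                    ;
                /* The only T/TCP change: MSG_EOF sends the reply and
                   the FIN together, so no separate shutdown() call. */
                send(c, reply, sizeof(reply) - 1, MSG_EOF);
                close(c);
            }
        }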

    In conclusion, research into T/TCP suggests that it is a protocol that is nearly, but not quite, ready to take over transaction processing for general usage. For T/TCP alone, more work needs to be done to develop it further and to solve the security and operational problems. The security problems can be addressed with other authentication mechanisms, such as Kerberos and the authentication facilities of IPv6. The operational problems can be dealt with by building greater transaction reliability into the applications that will use T/TCP, such as two-phase commits and transaction logs.

    Future work in this area could involve the promotion of T/TCP as an alternative to the TCP and UDP protocols for certain applications. T/TCP has been slow to take off; FreeBSD is its most widespread implementation on PC hardware. Now that Linux is T/TCP aware, it can push the use of the protocol further. Any application built around an open-close connection can use T/TCP efficiently, and applications are easily modified; the more prominent examples would be web browsers, web servers and DNS client-server applications. To a smaller extent, applications such as time, finger and whois daemons can benefit from T/TCP as well. Many networking utilities could take advantage of the efficiency of the protocol; all that is needed is the incentive to do it. Perhaps a more immediate task, though, is to port the T/TCP code to the new Linux kernel series, 2.1.x.

    Resources

    Braun H W, Claffy K C, "Web Traffic Characterization: An Assessment of the Impact of Caching Documents from NCSA's Web Server", Proceedings of the Second World Wide Web Conference '94: Mosaic and the Web, October 1994

    Mogul J C, Douglis F, Feldmann A, Krishnamurthy B, "Potential Benefits of Delta Encoding and Data Compression for HTTP", ACM SIGCOMM, September 1997

    Prud'Hommeaux E, Lie H W, Lilley C, "Network Performance Effects of HTTP/1.1, CSS1 and PNG", ACM SIGCOMM, September 1997

    Stevens W R, TCP/IP Illustrated, Volume 3, TCP for Transactions, HTTP, NNTP, and the UNIX Domain Protocols, Addison-Wesley, 1996


    Copyright © 1999, Mark Stacey
    Published in Issue 47 of Linux Gazette, November 1999

    "Linux Gazette...making Linux just a little more fun!"


    Teaching web site construction with Linux

    By Alan Ward


    Abstract

    This article is mainly for teachers who wish to do some web site construction, though it may be of interest to others. It is based on my personal experience over the last four years. Each year I end up doing things in a different way, and this is intended as a summary of actual practice.


    Introduction

    The main questions nowadays when teaching web construction are: which server environment to use, which HTML editor to work with, and how to debug the result.

    By environment, I mean: on which server (in terms of software) shall our site be placed? The main choice is between a Microsoft server running under Windows NT (professional) or Windows 9X (local intranet), or on the other hand a Unix server. In the latter case, Apache seems to hold a large part of the market, though it is by no means a monopoly :-).

    This is an important question as each server has its own capabilities and quirks.

    The HTML editor question depends to a certain extent on our answer to the first question. If we are developing for a Microsoft server, it makes sense to write our pages with MS Frontpage (complete or Express). The same goes for a Netscape server and Netscape Communicator. With a Unix/Linux server, the debate can be more extensive.

    You will notice that I have no particular tendency towards or away from Microsoft products. I am sure the Internet is large enough to find people working on any combination of hardware and software -- just as well! I personally develop with a Linux+Apache server and a Windows+iExplorer+Netscape+HotJava client.

    Naturally, our answers to these questions depend both on personal choice and on the end result we want to produce. To analyse them, several factors can be taken into account, which I will formulate as questions.


    The server environment

    The first factor is: "Do we want to produce something (i.e. a real web site) as a conclusion to our project?" The answer to this is almost always yes. Then we must see where we will host it. Shall the web site be on a local network (intranet), or must it go global (Internet)? Shall it start life locally (e.g. for development and testing) and hope to go global later on (when complete)? If so, special care must be taken to use the same kind of server on both.

    An example: say you develop locally with MS Personal Web Server (PWS) and, when finished, send your site to your favourite ISP -- who runs Unix+Apache. On PWS it is easy to write "\" instead of "/" to separate subdirectories, and PWS works fine with it. Apache does not, holding to the Unix convention (makes sense, right?). A stupid mistake, but it happened to me.

    So this is where Linux steps in. The Apache server works in exactly the same fashion on your local 486 Linux box and on your ISP's Sun/Solaris. So you have a fair certainty that if it works for you, it will also work on the Web. This point is particularly important when teaching kids: if what they do doesn't work because they messed up, that's OK. If it doesn't work because of a "technical problem", uh-oh. :-(

    All the more reason to know your ISP. Mine is one student's parent -- so anything I do in class goes back home for checking! Keeps me on my feet.

    Another point that can be made -- and that my students pointed out to me -- is the use of an FTP client. Web maintenance is something I like to speak about in my classes, and most maintenance is done nowadays by using FTP to upload pages to the server. As you may know, Microsoft Frontpage uses the local network to upload pages directly to a local server, while a Linux+Apache server almost always has an FTP server running, just like a "big" Unix box.

    So one can practice locally the moves needed to upload pages to the server before going online. You can also address matters such as home directories on the server versus ".public" directories.

    My experience so far is that my classes on Unix/Linux help students understand better the intricacies of web servers, while my classes on web maintenance help them find real-life, direct applications for their Unix knowledge. I guess it is important to give a complete and coherent picture.


    The HTML editor

    So we have chosen our local server, and are ready to develop. Now the question is: "Which editor do I use?"

    I like to start out with a plain-text editor, so that students can get the feel of pure HTML coding before going on to something more sophisticated. If they can write HTML, they will soon learn to use an advanced editor such as Frontpage and -- perhaps more importantly -- be able to correct the editor's output. The reverse is not always true.
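
    For that first contact with pure HTML, a hand-written page can be as small as this (late-1990s style, upper-case tags and all):

        <HTML>
        <HEAD>
        <TITLE>My first page</TITLE>
        </HEAD>
        <BODY>
        <H1>Hello!</H1>
        <P>This page was typed by hand in a plain-text editor.</P>
        </BODY>
        </HTML>

    A student who can read and fix a page like this can later judge -- and correct -- whatever an advanced editor generates.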

    Under Windows, I use NotePad. Mainly to escape from issues related to different file formats that can be a pain with Write, Works or Word. Under Linux, I use whichever is convenient (gedit, kedit, vi ...). I am writing this article with my favourite: Emacs with HTML mode enabled.

    When going on to a more advanced editor, Netscape Composer is the choice I will work with this course. This is because it is available on many platforms: my students have Intel boxes at home under Windows and Linux (my fault!), and also Macintoshes. I have been unable to find as many versions of other editors such as MS Frontpage Express or Hotmetal. The complete MS Frontpage I leave alone, as it doesn't seem to interface well with Unix servers.

    Even further on, it is worth examining the possibilities of MS Word or Publisher. OK, I know they both produce really ghastly HTML code! But they do produce this code -- from existing documents -- with relative ease. I gave a course on Frontpage this summer, and ended up realizing that most students (they were in fact fellow teachers undergoing in-service training) would use Word to produce their pages. The fact was they all wanted to publish texts they had written in Word on the Web. Now, when I get my hands on StarOffice, I may have to revise this judgement.


    Debugging

    A further factor is: "Who's going to read me, and with which browser?" As you may know, HTML browsers may produce quite different output from the same page. And to paraphrase Murphy's Law: if there exists a weird browser, someone out there is sure to use it.

    The only way to ensure that what we've produced is more or less universally accepted is to debug: i.e. try out our site locally on as many different browsers and operating systems as we can lay our hands on.

    This can also be a practical way of comparing operating systems in the classroom, so that students can see the diversity of OSes and browsers available. Not a bad way of introducing Linux to students who are still in the Windows stage.

    You can also see which features work with each browser, and which cause problems.

    One last point that deserves attention and debugging is (for us Latin-charset users) accentuation. For example, I test my pages with both a Spanish and a French Windows (accented characters have different codes in these charsets).


    Conclusion and prospects

    Although my final impression, after these four years, is to a certain extent one of confusion, at least I know why this is so. On the Internet, many different hardware and software (and meatware) setups coexist. In fact, it is one of the only ways many people get in touch with this diversity.

    So, from a teacher's point of view, one can either close one's eyes and bury one's head in the sand (see no evil, hear no evil), or face this diversity -- and pass it on to the students (say no evil!). Is it good to address such diversity directly, with the consequent danger of muddling things up? I can only answer from my personal situation: I feel my kids (17-18 year-olds) have enough experience as Internet users that this diversity has gradually crept into their consciousness. So if I straighten out the questions that do arise, that can't be too bad.

    A similar field I would like to work on in this course is web programming, both server-side (CGI) and client-side (Java). Here the Apache server once more gives me what I need to develop CGI in both C and Perl, unlike Microsoft's. One point to consider is that my ISP allows me to include Perl scripts in my pages, but not programs in C (which they would have to recompile).
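
    A first CGI exercise in C can be kept very small; this sketch (the file name and cgi-bin location depend on the server setup) just produces a page:

        #include <stdio.h>

        /* A minimal CGI program: the web server runs it and relays its
           output, headers first, back to the browser. */
        int main(void)
        {
            printf("Content-Type: text/html\r\n\r\n");
            printf("<HTML><BODY>\n");
            printf("<H1>Hello from a C CGI program</H1>\n");
            printf("</BODY></HTML>\n");
            return 0;
        }

    Compiled (gcc -o hello.cgi hello.c) and dropped into Apache's cgi-bin directory, it behaves just like its Perl equivalent -- which makes the C-versus-Perl question above purely one of what the ISP will accept.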

    On the other hand, the Java Development Kit (JDK) is available for download from Sun (java.sun.com), both for Linux and for Windows. Another thing I like a lot is their habit of giving many examples. Is there a better way of learning to program?

    [Editor's note: The author's previous article in LG #45, Sharing a Linux server under X in the classroom, is now available in Hungarian as Linux szerver megosztása a tanteremben X-Window segítségével. -Ed.]


    Copyright © 1999, Alan Ward
    Published in Issue 47 of Linux Gazette, November 1999


    The Back Page


    About This Month's Authors


    Steven Adler

    While not building detectors in search of the quark gluon plasma, Steve Adler spends his time either 4 wheeling around the lab grounds or writing articles about the people behind the open source movement.

    Larry Ayers

    Larry Ayers lives with his family on a small farm in Northeast Missouri; he is a woodworker, fiddler and general jack-of-all-trades. He can be reached at layers@vax2.rainis.net.

    Eugene Blanchard

    Eugene is an Instructor at the Southern Alberta Institute of Technology in Calgary, Alberta, Canada, where he teaches electronics, digital, microprocessors, data communications, and operating systems/networking in the Novell, Windows and Unix worlds. When he is not spending quality time with his wonderful wife and 18-month-old daughter watching Barney videos, he can be found in front of his Linux box. His hobbies are hiking, backpacking, bicycling and chess.

    Pedro Paulo Ferreira Bueno and Antonio Pires de Castro Junior

    Pedro is a Computer Science student at the Catholic University of Goiás (UCG, Brazil), the manager of LinuxGO, the Goiás Linux User Group, and the network card moderator at the Linux Knowledge Base. He has been a maniac Linux user since he started with Linux at kernel 2.0.7. When he is not in front of his Linux machine, he is probably playing soccer. He can be reached at pedro.bueno@persogo.com.br.

    Antonio is a masters degree student at UNICAMP. He is co-founder of LinuxGO and his favorite research topic is Network Communication. He can be reached at: apcastro@dcc.unicamp.br

    Jim Dennis

    Jim is the proprietor of Starshine Technical Services and is now working for LinuxCare. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/Peter Norton Group and McAfee Associates -- as well as positions (field service rep) with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the 2nd Edition of a book on Unix systems administration. Jim is an avid science fiction fan -- and was married at the World Science Fiction Convention in Anaheim.

    Alex Heizer

    Alex is a computer enthusiast living in New Jersey. He began on UNIX mainframes at his father's company as a child in the '70s and has worked on most varieties of desktop computers. Although most of his experience in the past 10 years has been on Microsoft (yecch!), his first ISP in 1994 ran Linux, and he has used and advocated Linux exclusively for more than a year.

    Peter Lukas

    Peter currently works as a Security Engineer for a large Midwestern Internet Service Provider. When he's not fighting crime on-line, he enjoys writing and improving his golf game. He can be reached by sending mail to peter@math.umn.edu.

    Vladimir Makarov

    Vladimir has been a member of the GCC team at Cygnus since March 1998. He has worked in the compiler field since 1980, and has been a Linux user since 1993.

    Bill Mote

    Bill is the Technical Support Services manager for a multi-billion dollar publishing company and is responsible for providing 1st and 2nd level support services to their 500+ roadwarrior sales force as well as their 3,500 workstation and laptop users. He was introduced to Linux by a good friend in 1996 and thought Slackware was the end-all-be-all of the OS world ... until he found Mandrake in early 1999. Since then he's used his documentation skills to help those new to Linux find their way.

    Mark Nielsen

    Mark founded The Computer Underground, Inc. in June of 1998. Since then, he has been working on Linux solutions for his customers ranging from custom computer hardware sales to programming and networking. Mark specializes in Perl, SQL, and HTML programming along with Beowulf clusters. Mark believes in the concept of contributing back to the Linux community which helped to start his company. Mark and his employees are always looking for exciting projects to do.

    Jesper Pedersen

    Jesper lives in Odense, Denmark. He is the author of the book "Sams Teach Yourself Emacs in 24 Hours", the program "The Dotfile Generator", and the Emacs package "Power Macros", and is the chairman of the Linux User Group on Funen in Denmark. In his spare time, he enjoys drinking wine and listening to music (esp. Depeche Mode) with his girlfriend Anne Helene, and walking in nature. For more information on Jesper, the Emacs book, The Dotfile Generator or Power Macros, please visit www.imada.sdu.dk/~blackie/.

    JC Pollman

    I have been playing with Linux since kernel 1.0.59. I spend way too much time at the keyboard and even let my day job -- the military -- interfere once in a while. My biggest concern about Linux is the lack of documentation for the intermediate user. There is already too much beginner's stuff, and the professional material is often beyond the new enthusiast.

    Bob Reid

    Rob is doing his Ph.D. in Astronomy at the University of Toronto, where he was a system administrator on the side for a while; he has also been running his own Linux boxes at home and at school since 1995.

    Anderson Silva

    Anderson is a Senior at Liberty University majoring in Computer Science. Originally from Brazil, he now works at the University's Information Technology Center. He is also a member of the Lynchburg Linux User Group in Lynchburg, Virginia.

    Slambo

    I've been playing with PCs since the early 80s, and got a hold of Linux about 2 years ago. For the last 7 years I have provided end-user computer support, and written documentation mainly for other support reps. In the last year I have begun writing for publication, including articles in Linux Gazette and contributing some chapters for "Special Edition Using KDE" (Que Publishing, due out November 99). I am a member of the Madison Linux User Group and the Open Source Writers Group. When I'm not working or playing on my computer, I am building and operating model railroads and attending meets of the South Central Wisconsin Division NMRA and the Capitol City "N"Gineers. I can be reached via email at slambo@linuxstart.com.

    Mark Stacey

    Mark Stacey <Mark.Stacey@icl.ie> graduated from the University of Limerick, Ireland, in 1998 with a first class honors degree in Computer Engineering. His interests include Java programming and Web development. He currently works for ICL in the Information Technology Center based in Dublin, Ireland.

    Alan Ward

    "Alan teaches CS in Andorra at highschool and university levels. He's back to Unix this year after an 8-year forced interlude since he graduated -- it makes networking so much easier. His hobbies include science photography (both digital and traditional), trekking, rock and processor collecting.


    Not Linux


    [ Penguin reading the Linux Gazette ]

    This month's Gazette is again chock-full of articles --- 19 of them, not counting the regular columns. Way to go, authors! New features this month include the subject index for the Answer Guy, and a new series. Slambo has begun a series of web site reviews called "LSOTM (Linux Site O' The Month)".

    Professor A Cartelli wrote in about his Italian translation of the Gazette, so I took the opportunity to ask how long it takes to do the translation and how many people are involved. He said, "Too much time, not enough people. We are usually one month late after your magazine. It usually takes 6-10 people."

    The Gazette received 389 letters this month. Of these, 104 were spam. The Linux Gazette Spam Count for November is therefore 26%, down 2% from last month.

    Here are excerpts from the more hilarious ads:

    This edition of the Linux Gazette was brought to you by the Dropkick Murphys, Anti-Flag, H20, and Punk-O-Rama 4, which were playing in my walkman continuously as I formatted the columns and the Table of Contents.


    Linux Gazette Issue 47, November 1999, http://www.linuxgazette.com
    This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
    Copyright © 1999 Specialized Systems Consultants, Inc.