How all this works
By Reno Bailey
When we first started this website this past February, I was explaining it to Mama, and she asked me, “When is it on?”
I said, “Mama, it’s on all the time.”
“Well,” she asked, “how long is it?”
“It’s as long as it takes to read the articles and look at the pictures,” I explained. “It’s like a library. You just go there and look at and read whatever you like, and leave whenever you like.”
Mama has a better understanding of it now, since Guy and Helen Hamrick were kind enough to invite her over to their house and take her on a tour of “Remember Cliffside.”
I got to thinking. What do we really know about this new-fangled way of doing and learning things? I’ll tell you all I know, which isn’t much. Maybe in the telling, I’ll understand it too.
While we slept, they wired our country and much of the world, connecting computers everywhere by constructing vast networks over telephone lines, fiber-optic cable and satellites.
Here while back, you went down to Circuit City or Office Depot and bought yourself a computer. Then you found a company (or it found you) to hook you up to the internet. These companies are called Internet Service Providers (ISPs). There are hundreds of them. The big boys are AOL, Time-Warner, AT&T, Earthlink, and others. And there are smaller, more localized ones like Blue Ridge and RCFI, to name two in Rutherford County.
To better understand what a website—such as RememberCliffside.com—is, think of it in terms of those mini-storage businesses that have sprung up like weeds in every town, big and small. You sign up with a company for a prescribed amount of space, store your stuff, and pay a certain amount per month. The storage buildings in this context are called servers—high-capacity, high-speed computers. The larger data storage companies have server “farms,” where great numbers of computers are concentrated.
One of these space providers is Yahoo! Geocities, the one we chose to hold the “stuff” of Remember Cliffside. Their servers, at least the one we’re using, happen to be in Sunnyvale, California (in “Silicon Valley,” just south of San Francisco). We contracted for 25 megabytes of space, with a minimal amount of traffic, or daily visitors to the site. Currently, we have about 140 visitors on any given day. If that number shot up to, say, 5000 a day, Yahoo! would quickly come knocking on our door, demanding more money.
So, you go to your computer, start your internet browser, and click the Remember Cliffside link in your Favorites list. What happens then? Your ISP starts a process that is, essentially, a journey. It connects you to your destination (the server) by the quickest route possible. The server responds, sending your requested information back to you, again on the quickest route.
Actually your ISP puts your request on a network, which will be switched to another network, then another and another. It’s like shipping something by train: Your shipment starts out on the Seaboard line, but in New Orleans it’s put on the Santa Fe line, then, in Cheyenne, it’s moved to the Union Pacific. Some of these “lines” (or “networks” in the internet context) are named AOL Transit Data Network, Qwest, Colorado Super Net, Road Runner Net, Maxim, etc. There are dozens, perhaps hundreds of them. The cities in this metaphor are like hubs, where one network turns the data over to another.
Now, within a network there are connection points called “nodes” which work like relays. Equate this scheme to that Seaboard train running to New Orleans. First it goes to Spartanburg, S.C., where your shipment is handed off to another train, which takes it as far as Atlanta, where it’s switched to a Birmingham-bound train, and so forth. These nodes are also computers (special servers, in fact, called “routers”), located in computer centers along the way, many on college campuses. Their purpose is to recalculate the route, based on traffic patterns, and maintain the integrity of the data throughout the journey.
An average long-distance request, say from Charlotte (where I live) to Sunnyvale, will go through a couple of hubs, several networks and many nodes. When I visit the Remember Cliffside site, the request usually travels through about 20 nodes, hitting Atlanta and perhaps Dallas-Ft. Worth (important hub sites), then directly to San Jose, California, where it’s then bounced next door to Sunnyvale. How do I know this? There are software programs one can use to visually plot the route of a request from your computer to the server. They also give you, in milliseconds, the time it takes to travel between nodes. (You can examine each node’s information, learning its exact location and which network it is serving.) But the exact route of a given request depends on the current volume of traffic. During heavy traffic periods, the request might jump up to Chicago, then down to Houston, or any which way. (See route map.) For example, sometimes, when you access amazon.com, if their servers in Seattle are handling heavy traffic, they will bounce your request all the way back across the continent to Montreal!
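The route-picking those nodes do amounts to a shortest-path search: always extend the cheapest route found so far. Here is a minimal sketch of the idea in Python. The hub names and millisecond “costs” below are invented for illustration—real routers keep tables like this and update them as traffic changes:

```python
import heapq

# A made-up map of hubs and link costs (think: milliseconds between nodes).
links = {
    "Charlotte":        {"Atlanta": 12},
    "Atlanta":          {"Dallas-Ft. Worth": 20, "Chicago": 18},
    "Dallas-Ft. Worth": {"San Jose": 35},
    "Chicago":          {"Houston": 25},
    "Houston":          {"San Jose": 40},
    "San Jose":         {"Sunnyvale": 1},
    "Sunnyvale":        {},
}

def quickest_route(start, goal):
    """Dijkstra's algorithm: always extend the cheapest route found so far."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, hub, path = heapq.heappop(queue)
        if hub == goal:
            return cost, path
        if hub in seen:
            continue
        seen.add(hub)
        for nxt, ms in links[hub].items():
            heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return None

print(quickest_route("Charlotte", "Sunnyvale"))
# → (68, ['Charlotte', 'Atlanta', 'Dallas-Ft. Worth', 'San Jose', 'Sunnyvale'])
```

If Atlanta’s links got congested (raise those costs), the same search would instantly pick the Chicago–Houston path instead—which is exactly the rerouting during heavy traffic described above.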
What is a request?
When you submit a request, what are you requesting? You’re simply asking the server to show you the “home” (or front) page of a particular website. (A page and each image on it [buttons, pictures, etc.] are actually small, separate computer files.) In response to your request, that home page file, and all other files associated with it, are sent to, and stored on, your computer. (They’re considered temporary files, but, unless you delete them, they’ll stay on your computer until they’re overwritten by a newer file with the same name.)
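Under the hood, that request is just a few lines of plain text your browser composes and sends to the server. Here is a sketch of one built by hand in Python—the host name and path are taken from the site discussed above, but the exact headers a given browser sends will vary:

```python
# A browser's request for a home page is plain text in the HTTP/1.1 format.
host = "www.remembercliffside.com"   # the website's server name
path = "/index.html"                 # the home page file

request = (
    f"GET {path} HTTP/1.1\r\n"   # the verb, the file wanted, the protocol
    f"Host: {host}\r\n"          # which website on that server
    "Connection: close\r\n"      # hang up after answering
    "\r\n"                       # blank line marks the end of the request
)
print(request)
```

The server answers with the requested file; the browser then issues one such request for every image on the page.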
Once the home page (in our case that file is named “index.html”) is on your computer, you will use buttons (such as “Memories” or “Town Map”) or text links to connect to other pages on that website. To click on a link is to send a request for a specific page.
If you send a request to view, say, the “Memories” page, your browser first looks for its file (named memories.html) among the temporary files on your computer’s hard drive. If it doesn’t find it, it asks the server, which immediately sends the file. If it does find a copy, the browser asks the server whether the file has changed, by comparing modification dates and times. If the file on the server is newer, the server sends your computer the new version; otherwise your saved copy is used.
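That date-checking step can be sketched as a little function. This is a simplified model, not a real browser—in practice the browser sends an “If-Modified-Since” header and the server answers “304 Not Modified” when your copy is still current:

```python
def fetch(page, cache, server_files):
    """Return (where_it_came_from, contents), fetching only when the cached copy is stale."""
    cached = cache.get(page)
    on_server = server_files[page]   # (modification_time, contents)
    if cached is None or cached[0] < on_server[0]:
        cache[page] = on_server      # store (or refresh) the temporary file
        return "server", on_server[1]
    return "cache", cached[1]

# Made-up modification time and contents, just to show the bookkeeping:
server = {"memories.html": (20040115, "<html>Memories page</html>")}
cache = {}
print(fetch("memories.html", cache, server))  # first visit: comes from the server
print(fetch("memories.html", cache, server))  # second visit: your saved copy is used
```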
The care and feeding of a website
Every individual web “page” that you view is a set of files. When the webmaster—yours truly—finishes creating new pages (by writing text, selecting and cropping pictures, and designing the layout using a language called “HTML”), he “pushes up” (or copies) these new files from his computer to the faraway server with a communications program that uses something called “FTP” (File Transfer Protocol). He must push up not only the HTML file for each page, but also the files for all images used by the pages. “Remember Cliffside” comprises well over 1,000 files, using about half our allocated 25 megabytes of space. (A megabyte is the space required to handle about one million characters.) If we should exceed the allocated space—as we are likely to do—the rent will go up.
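A pushing-up session boils down to two chores: deciding which files have changed, and copying those to the server. Here is a sketch in Python using its standard ftplib. The file names, modification times, and server details are placeholders, not this site’s real particulars:

```python
from ftplib import FTP

def files_to_push(local, remote):
    """Pick the files whose local copy is newer than (or missing from) the server."""
    return [name for name, mtime in local.items()
            if name not in remote or remote[name] < mtime]

# Made-up modification times, just to show the bookkeeping:
local  = {"index.html": 200, "memories.html": 310, "images/mill.jpg": 150}
remote = {"index.html": 200, "memories.html": 250}
print(files_to_push(local, remote))  # → ['memories.html', 'images/mill.jpg']

def push_up(names, host, user, password):
    """Copy the chosen files to the faraway server over FTP."""
    ftp = FTP(host)                           # connect to the server
    ftp.login(user, password)                 # the webmaster's account
    for name in names:
        with open(name, "rb") as f:
            ftp.storbinary(f"STOR {name}", f) # upload one file
    ftp.quit()
```

The `push_up` call is shown but not run here, since it needs a live server and real credentials.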
For a more in-depth (and likely more accurate) description of web methods and issues, visit the website How Things Work.
Editor’s note: Since this was written, many years ago, technological advances have rendered this description as obsolete as buggy whips and corsets, and many of the companies mentioned no longer exist. As of Spring 2016, this site has about 3,500 pages, and over 4,000 images.