Friday, March 12, 2010

Scenarios

Scenario-1:


You have installed Windows with Service Pack 2 and, after updating Windows to Service Pack 3, you are able to log in to the system but keep receiving a continuous message that your copy of Windows is not genuine. What solutions are available for this problem, both legal and illegal?

Illegal

The Windows Genuine Notification popped up because Windows Update installed it again via Automatic Updates. It appears when a user logs in to Windows, displays a message near the system tray, and keeps reminding you during work that the copy of Windows is not genuine. It has been reported since its first release that even genuine users get this prompt, so Microsoft itself has released instructions for its removal. When I searched Google about this issue, I landed on pages providing many removal methods, including ones that patch existing files with cracked versions; I would highly recommend avoiding those, as they might contain malicious code and get you into more trouble.

I found this method of removing the Windows Genuine Notification:

1. Launch Windows Task Manager.

2. End wgatray.exe process in Task Manager.

3. Restart Windows XP in Safe Mode.

4. Delete WgaTray.exe from C:\Windows\System32.

5. Delete WgaTray.exe from C:\Windows\System32\dllcache.

6. Launch RegEdit.

7. Browse to the following location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify

8. Delete the folder 'WgaLogon' and all its contents.

9. Reboot Windows XP.

But the latest version of the WGN tool is a little tricky to handle. It pops up again as soon as you end it from Task Manager, and while it is running in memory you cannot delete it either.


Illegal

Download a patch from the Internet and run it on your Windows installation.

Legal

Register your copy of Windows through Microsoft's official website.


Scenario 2:

You downloaded Windows 7 from Microsoft's official website in December 2009, and at present your system reboots every 2 hours. What solutions, legal or illegal, are available to overcome this problem?

Ans:

If you have a warm fuzzy feeling inside when thinking about Microsoft and their decision to let you play with their new OS for free until August next year, get ready for the kicker. From March, the release candidate you are running is going to start reminding you, in the most intrusive way possible, that a commercial copy of the OS needs to be purchased to continue enjoying the benefits of Windows 7.

You can understand Microsoft wanting to remind users that they need to buy Windows 7, but it's the method they have decided to employ that is going to annoy and frustrate users. From March 2010 the Windows 7 RC will start automatically rebooting your PC every two hours. So, if you happen to be doing something important, you'll have to stop when the friendly "buy me!" shutdown reminder is invoked.

For the RC, bi-hourly shutdowns will begin on March 1st, 2010. You will be alerted to install a released version of Windows, and your PC will shut down automatically every 2 hours. On June 1st, 2010, if you are still on the Windows 7 RC, your license will expire and the non-genuine experience is triggered: your wallpaper is removed and "This copy of Windows is not genuine" is displayed in the lower right corner above the taskbar.

This isn't a new tactic; Microsoft did the same thing with the Vista previews to remind users they need to upgrade. Windows 7 is expected to release in October this year, and at the very latest will be out by January next year, giving you plenty of time to buy a copy before the automatic shutdowns begin.

Windows 95

Introduction to Windows 95


Windows 95 was developed to fulfill the needs of home and small-business users. Microsoft developed a robust operating system intended to overcome the limitations of Windows 3.x (meaning 3.0, 3.1 and 3.11).

Microsoft's Windows 95 operating system was the most significant and highly publicized computer software release of August 1995. It was released in more than 40 countries worldwide simultaneously, and millions of DOS and Windows 3.1 users soon shifted to Windows 95.

Windows 95 is a GUI (Graphical User Interface) operating system. It provides two interfaces: one between the user and the applications, and another between the applications and the computer's devices and files.

With the release of Windows 95, many people were in favour of the OS, while others campaigned against it. The biggest criticism concerned hardware requirements: some claimed that the change of OS (DOS to Windows 95) would force users to change their PCs. In fact, Windows 95 can be loaded onto a new computer or installed as an upgrade to an existing system.

Features of Windows 95

1. Easy to learn - Windows 95 is easier to learn and use than its predecessors, DOS and Windows 3.x. It also has reliability improvements over previous versions of Windows.

2. Full-Fledged Graphical OS - The new DOS/Windows 95 combination is finally a full-fledged graphical operating system with its own graphics symbol. It can run DOS, Windows 3.0/3.1 and some 32-bit Windows NT applications quickly and efficiently.

3. Protected mode - Windows 95 and its applications run in the PC's protected mode, which means a misbehaving program cannot freely use memory and other resources belonging to others. If any program becomes faulty, the OS and its other programs are not affected.

4. Multithreaded OS - Windows 95 is a multithreaded operating system: it can run multiple applications simultaneously, more smoothly than Windows 3.0/3.1, and can keep a number of applications active at once.

5. Preemptive Multitasking OS - Windows 95 is a preemptive multitasking operating system, meaning programs running in the background do not disturb the interactive programs running in the foreground.

6. 32-bit code - The major portion of the Windows 95 operating system is 32-bit code, which gives better compatibility with Intel 80386, 80486 and Pentium processors. The memory manager, scheduler and process manager are all 32-bit in this OS.

7. Taskbar - It has a taskbar that is always accessible on screen, with buttons listing the currently running applications that make it easy to switch between them.

8. File name - In Windows 95 a file name can be up to 255 characters long. Long file names are supported alongside the three-letter extensions used by DOS.

9. Integrated System - Windows 95 integrates virtually all computer tasks and resources, such as networks, communication (e-mail, faxing) applications, multimedia features, system administration and printing. It also supports a new mail system called Microsoft Exchange for managing all types of messages.

10. Registry - It stores important hardware settings, application settings and user rights in a central location called the Registry, replacing settings previously scattered across files like Autoexec.bat, Config.sys, Win.ini and System.ini. (A small sketch of reading the Registry follows this list.)

11. Disk Performance - It ensures faster disk performance than the 16-bit file system used by Windows 3.1 and DOS.

12. Error Message - It displays an error message if a program crashes, and the crashed program can be removed from the task list without affecting other running applications.

13. Memory Space - Windows 95 provides much more conventional memory space for DOS applications by providing protected-mode replacements for drivers such as SmartDrive, the mouse driver, Share.exe, and CD-ROM and SCSI (Small Computer System Interface) device drivers.

14. Sharing of resources - With Windows 95 you can set up the computer for use by different people, each with their own desktop, shared resources, user rights and other settings.

15. Plug-and-Play - It supports the Plug-and-Play standard, which allows a user to plug a new board, such as a video, audio or network card, into the computer without having to set switches or make other settings.

16. RAS - It has RAS (Remote Access Services), which lets users dial in to a Windows 95 network, log on and connect just as they do from their desktop machine.
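
To make feature 10 concrete, here is a minimal C sketch that reads a single value from the Registry through the Win32 API. It is an illustration only: the key and value shown (ProductName under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion) are an assumption about where a given version of Windows keeps this setting, and the exact location varies between versions.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    char value[256];
    DWORD size = sizeof(value);

    /* Open the key read-only; close it again when done. */
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows\\CurrentVersion",
                      0, KEY_READ, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "could not open key\n");
        return 1;
    }
    if (RegQueryValueExA(key, "ProductName", NULL, NULL,
                         (LPBYTE)value, &size) == ERROR_SUCCESS)
        printf("ProductName: %s\n", value);
    RegCloseKey(key);
    return 0;
}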

WINDOWS NT SERVER INSTALLATION

WINDOWS NT 4.0 Server installation


There are several phases involved in NT server installation which are as follows:

Phase 0: Preparing for NT server installation

Phase 1: Gathering information about the computer on which you intend to install Windows NT

Phase 2: Implementing Windows NT networking

Phase 3: Finishing the set-up process

Now we will discuss these phases one by one to understand the complete installation process:

Phase0: Preparing for the installation of NT server

In the preparation phase of installation, NT copies sufficient files to run a limited version of NT Server. Once this limited version is running, the installation process is sped up thanks to the multitasking capabilities of Windows NT. During Phase 0 the installation program tries to detect the hardware of your computer, more specifically the video adapter and the mass storage devices. After this information is found, the installation program asks the installer on which partition NT should be installed, which file system should be used (FAT or NTFS), and the name of the directory in which the NT system files should be stored.

Phase 1: Gathering Information about the Computer selected for Installation

At this stage information about your computer is collected by the installation program. During this phase you will be prompted to provide the following information:

(a) Name and organization: Here you have to enter your name and the name of your organization. It is necessary to give a user name; the organization name, however, is optional. This information is needed for licensing purposes only.

(b) Licensing Mode: There are two licensing modes, per server and per seat. In per-server licensing mode, each server must be licensed for each concurrent connection. For example, if you have 10 users and three servers, and all ten users need to access all three servers, then each of the three servers must be licensed per server for 10 concurrent connections. With a per-seat license, each client is licensed at the client side and can access any number of servers in the network; with per-seat licenses for 10 users, all 10 clients can access all three servers simultaneously, and there is no need for a separate license on each server. So in a multi-server network per-seat licensing is economical, but for a small single-server network per-server licensing is advisable.

(c) Computer name: You have to give an appropriate name for the computer. This name will be used for accessing the NT server from the network.

(d) Type of server: Here you have to specify whether the server will act as a primary domain controller (PDC), a backup domain controller (BDC) or a member server. If this is the first server, it is advisable to make it a PDC.

(e) Administrative password: You have to specify a password which will be used for logging into the system with administrative privileges.

(f) Emergency repair disk: You can create an emergency repair disk at this stage; it can be useful in fixing system problems later. ERDs are machine specific, so you have to create one for each NT machine.

(g) Optional components: Select the optional components you wish to install; these include Accessibility options, Accessories, Communications, Games, Microsoft Exchange and Multimedia.

Phase 2: Implementing Windows NT networking

In this phase you have to supply the parameters related to the networking aspects of the server being installed. You have to supply answers for the following:

1. Direct or dial-in connection to the network: Here you specify whether the server to be installed will be connected directly to the network or through a modem and telephone line. For installing NT Server you will most likely connect it directly to the network.

2. Optional installation of Internet Information Server: You can install IIS if you wish to build an intranet in your organization using this server; otherwise you may skip this option.

3. Adapter and protocols: Here you have to select, from the list of network cards, the one that corresponds to the actual card installed in your system, so that the installation program can install the appropriate driver. If the list does not contain your network card, choose the "Have Disk" option and give the path of the LAN driver files on floppy, CD-ROM or hard disk, as the case may be (the floppy or CD containing the LAN driver should have been supplied with the network card by the manufacturer). This loads the appropriate LAN driver; then you have to select the appropriate protocols, such as TCP/IP or NetBEUI, and see that each protocol is properly bound to the network card. If you select TCP/IP, you have to give a unique IP address and the subnet mask for your computer. (Every computer has a unique IP address consisting of four numeric fields of eight bits each. The subnet mask identifies which bits form the network address part and which form the node address part of the IP address; a short sketch of this split follows this list.)

4. Additional network services: Select from services such as Services for Macintosh or Gateway Services for NetWare. (You need not select them now if you don't want them; you can add these services at a later stage, after installation.)

5. Domain: Specify the name of the domain your computer should belong to. This is very important. If your server is going to be a PDC, this is the domain your server will control. If it is a BDC or any other server, you have to give the name of a domain whose PDC already exists. Please note that for the installation of a BDC the primary requirement is that the PDC be up, running and connected to the network.
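
As mentioned in item 3 above, the subnet mask splits an IP address into a network part and a node part with a simple bitwise AND. The following small C sketch demonstrates the split using the same example address and mask as step (35) below; the values are illustrative only.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t ip   = (140u << 24) | (130u << 16) | (120u << 8) | 1u; /* 140.130.120.1 */
    uint32_t mask = (255u << 24) | (255u << 16);                    /* 255.255.0.0   */

    uint32_t network = ip & mask;   /* network address part */
    uint32_t node    = ip & ~mask;  /* node (host) part     */

    printf("network: %u.%u.%u.%u\n",
           network >> 24, (network >> 16) & 255,
           (network >> 8) & 255, network & 255);
    printf("node id: %u.%u\n", (node >> 8) & 255, node & 255);
    return 0;
}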

Phase 3

This is the last phase of the installation program. NT requires the following information to complete the configuration:

(a) Date/Time and Time Zone: You give these parameters here for the correct system time and time zone; otherwise NT will pick them up from the computer by default.

(b) Video Driver: After choosing your video settings, you are asked to test them before they are saved. This helps prevent you from selecting wrong settings, since a wrong setting may lead to an unreadable display, which is as good as a non-functioning server.

Having discussed the different phases of installation, we can now go on to the detailed steps, following which you can practically install Windows NT Server on any computer fulfilling the minimum hardware requirements discussed earlier.



STEPS FOR INSTALLING WINDOWS NT 4.0 SERVER

(1) Boot your system to DOS or Windows 95/98.

(2) If your computer is booted to DOS, ensure that the CD-ROM driver is loaded. Put the first CD in the CD drive and from the command prompt enter x:\i386\winnt, where x is the drive letter of your CD drive. A setup screen will appear and ask for the path where the Windows NT files are located; by default this shows the directory from which you ran winnt. Press Enter to continue. If your computer is running in a Windows 95/98 environment, you can give the above-mentioned path through the Run option or select it through Explorer.

(3) Next the winnt program will create three NT boot floppies (keep three high-density blank floppies ready for this purpose), which will be used later to start the installation and provide the storage device drivers for your computer.

Be careful in noting the order in which these floppies are created; the order is NT Server setup disk #3, setup disk #2 and setup boot disk #1.

(4) After the floppies have been created, the NT Server setup boot disk is already in the floppy drive, and all you need to do is press Enter to reboot the system. The computer will boot to the Windows NT setup screen. It may take some time to load the necessary setup files.

(5) The system will then prompt you to insert setup disk #2; insert it and press Enter. More files will be loaded and the Windows NT kernel will load.

(6) The next screen is the Welcome to Setup screen. (You can also do an NT Server installation without any floppies: in step 2, give the /b option to the winnt program, for example x:\i386\winnt /b, and the installation will proceed without asking for any diskettes.)

(7) NT will now attempt to detect your mass storage devices. Press Enter to continue.

(8) You will then be prompted to insert diskette #3; insert this floppy and press Enter.

(9) Next the installation will display a list of the mass storage devices detected in the system (for a SCSI device only the controller is listed). If the list is correct, press Enter. If it is not, choose S and provide the driver for the mass storage device actually present but not listed. At this point the NT-supported file systems (FAT and NTFS) and the appropriate device drivers for your computer will be installed.



(10) Next the license agreement screen will appear. Press Page Down until you have read the whole agreement, then press F8 to accept its terms and conditions. (If you don't agree, installation will not take place.)

(11) The installation program will then display a list of the hardware and software components detected on your computer. If this list is correct, press Enter.

(12) Next the system will ask on which drive you want the NT system files to be loaded. (As an example, choose the C: partition and press Enter.)

(13) The next screen will ask how to partition your drive. Select the option "leave the current file system intact" and press Enter.

(14) Next, give the path where NT Server will be installed; \winnt is the default, so you may accept it and press Enter.

(15) The NT Server setup screen will now display a notice that the setup program will check your hard disks for corruption; press Enter to continue. Some more files will be copied, which may take a few minutes.

(16) After the file-copying process completes, you will be prompted to restart your system. Remove any floppy from the floppy drive and press Enter. The computer will restart and NT Server will load. While NT is loading, the CHKDSK utility will run and automatically check the disk partitions.

(17) Next the Windows NT setup screen will appear; click Next, and this will start phase one (gathering information about your computer) of the installation.

(18) Next you will be prompted to type in your name and organization (see Phase 1 above). Type these in and click Next.

(19) Next you will be prompted to type in the registration CD key. Type it in and click Next.

(20) The next screen is for selecting the licensing mode. Choose a per-server license for 10 users and click Next.

(21) Next you have to type in the name of your computer and click next.

(22) Next you will be prompted for your server type: primary domain controller, backup domain controller or stand-alone server. If this is the first server, select PDC. If it is the second or a subsequent server in the selected domain and you want it to replicate the user database with the PDC so as to participate in user login authentication, select BDC. If you don't want database replication and don't want the server to participate in login authentication, choose stand-alone server; in that case your server will act as an application server or file server only.

(23) Next you will be prompted for the administrator password. Type a suitable password and retype it to confirm. The password length should not exceed 14 characters.

(24) Next you will be prompted to create the emergency repair disk. Insert a high-density blank floppy in the floppy drive and choose Yes.

(25) Next you will be asked which components you want to install; accept the default choices and click Next.

(26) Now the next installation phase (Phase 2 - implementing NT networking) begins. Click Next.


(27) The installation program will ask how this computer will participate in the network; select "Wired to the network".

(28) Next the NT Server installation program will ask whether you want to install Internet Information Server (IIS). If you don't want to set up an intranet at this stage, deselect the choice by unchecking the box (you can install IIS at any later time). Click Next to continue.

(29) Next the installation program searches for your network adapter card. Click Start Search to begin. After your network card is found, click Next.

(30) Next the network protocols screen will appear; leave the default selection of TCP/IP and NWLink IPX/SPX Compatible Transport and click Next.

(31) Next the network services screen will appear. Keep the default services: RPC Configuration, NetBIOS Interface, Workstation and Server. Click Next.

(32) Click next to install your network components.



(33) Now the network card screen will appear. Check the hardware configuration of your card against the one displayed on this screen; if it is OK, click Next, otherwise modify the options as required and then click Continue.



(34) At the next stage you will be required to give configuration details for the selected protocols, starting with TCP/IP. You will be asked whether there is a DHCP server on your network (your system administrator can tell you). If DHCP is not being used, or you don't want IP addresses to be assigned automatically, answer No to "Do you wish to use DHCP?". The installation program will then copy files relating to the selected network components.

(35) Next the IP address configuration screen will appear. Click the "Specify an IP address" option. As an example you can type the IP address, subnet mask and default gateway as follows:

• IP address: 140.130.120.1

• Subnet mask: 255.255.0.0

• Default gateway: leave blank

Click OK.

(36) Next the network bindings screen will appear; keep all the default settings and click Next.

(37) Then you will be prompted to click next to start the network.

(38) The Windows NT setup program will now ask you to enter your computer name and domain name; type in these two names of your choice and click Next. The system will take a few minutes to check for duplicate names.

(39) Now begins the third phase, and the installation program will display the Finishing Setup screen. Click Finish, and the installation program will complete some configuration based on the options you selected earlier.

(40) The next screen prompts you for Date/Time properties. NT picks up the date and time from your computer's CMOS information. The time zone defaults to Greenwich Mean Time, so select the appropriate settings for your time zone. Click Close.

(41) Now NT will detect your video adapter. If the correct video adapter appears, choose OK. If it does not, you can customize it after completing the NT installation by going to Control Panel and selecting "Display".

(42) You can test your display by selecting the Test button and clicking OK; the test takes around five seconds, after which NT asks "Did you see the test bitmap properly?". If the display was correct, select Yes and click OK to save your settings. Click OK again to close the display settings window.

(43) NT will now finish copying the Windows NT system files. When the file copying is complete, you will be prompted to insert a floppy and click OK; this becomes your emergency repair disk. Finally, remove any floppy and reboot your system. This is the end of the installation.

Novell Netware

What is Novell Netware?


Novell NetWare is a Novell network operating system (NOS) that provides transparent remote file access and numerous other distributed network services, including printer sharing and support for various applications, such as electronic mail transfer and database access.

NetWare is a network operating system developed by Novell, Inc. It initially used cooperative multitasking to run various services on a personal computer, and the network protocols were based on the archetypal Xerox Network Systems stack.

NetWare has been superseded by Open Enterprise Server (OES). The latest version of NetWare is v6.5 Support Pack 8, which is identical to OES 2 SP1, NetWare Kernel.

HISTORY

In 1981 a small company called Novell Data Systems received an investment of $8 million to make and sell a new computer system. At the time, the only desktop systems around were made by Apple, Commodore, Tandy and Psion. This new system would include a printer and data drive as well as the base processor. Unfortunately, one month prior to the release of the Novell Data Systems computer, IBM introduced their first PC. Jack Messman was working for the venture capital company that had invested in Novell Data Systems; he was tasked to recoup as much of the $8 million as possible and was sent to Provo to oversee the orderly liquidation of the company.

Whilst working late one night, Jack heard some commotion in the small warehouse attached to the offices where he worked. Jack went to investigate, a fortuitous decision. What he discovered were three young men playing a game, the same game, but on different machines. The name of the game was 'Snipes', the very first networked PC game: a text-based maze game in which the idea was to 'blast' your opponent whilst negotiating the maze.

The company was renamed Novell and by 1983 had gone public. Also during 1983, Jack rejoined the board of directors.

By the late 1980s Novell NetWare was the clear industry leader in network file and print services, with over 70% market share.

What are the key events or dates in Novell's history?

1979 Creation of Novell Data Systems – Hardware company

1981 Jack Messman hires Superset to develop a data sharing PC network

1981 Product Named NetWare

1982 Ray Noorda appointed

1983 Novell goes public

1992 NetWare 4 released with innovative directory services

1999 NetWare 5 Released introducing open standards, cross platform services and pure IP support

1999 NDS renamed to eDirectory with version 8.5 release. Major performance and architecture changes including cross platform support for Windows, Linux, Solaris and UNIX

2001 Cambridge acquired giving Novell a focus on business solutions

2002 Silverstream Acquired giving application integration, client interaction and Web Services

2003 Novell launches Secure Identity Management Solutions that run in any environment

2003 Novell announces clear support for Open source community and Linux Kernel for NetWare 7

What do you see as Novell's future?

With the acquisition of Cambridge Technologies and SilverStream, Novell can now deliver a complete solutions approach. Traditionally Novell has been seen as a product-based company; now, with all the pieces in place, Novell offers business solutions to real customer needs, end to end, meaning from the person (or application) that requires a resource to the resources themselves. This encompasses security, identity management, profiling, customized delivery, application integration and single sign-on, as well as the traditional file and print services, now greatly enhanced. All our solutions are now organised, under the One Net vision, into collective solutions: NSURE provides secure identity management; Nterprise delivers cross-platform services; exteNd gives application integration and Web services; and with Ngage we deliver project management, consultancy (business and technical) and education along with our partners.

Current NetWare situation

While Novell NetWare is still used by some organizations, its ongoing decline in popularity began in the mid-1990s, when NetWare was the de facto standard for file and print software for the Intel x86 server platform. Modern (2009) NetWare and OES installations are used by larger organizations that may need the added flexibility they provide.

Microsoft successfully shifted market share away from NetWare products toward its own in the late 1990s. Microsoft's more aggressive marketing was aimed directly at management through major magazines, while Novell marketed NetWare through IT specialist magazines with distribution limited to select IT personnel.

Novell did not adapt its pricing structure accordingly, and NetWare sales suffered at the hands of corporate decision makers whose valuation was based on initial licensing fees. As a result, organizations that still use NetWare, eDirectory and Novell software often have a hybrid infrastructure of NetWare, Linux and Windows servers.

Netware Lite / Personal Netware

In 1991 Novell introduced a radically different and cheaper product, NetWare Lite, in answer to Artisoft's similar LANtastic. Both were peer-to-peer systems: no specialist server was required; instead, all PCs on the network could share their resources.

IMPLEMENTATION

Each agency should develop its own internal implementation plan. Such a plan should take into account such factors as:

• Availability of administrator time especially after business hours

• A timetable for upgrading the workstation client software

• Upgrading of server hardware

• Any downtime impact on user schedules

ITS will maintain a separate tree ("NCTEST") for testing purposes. Agencies may wish to install a test server to test various NDS organization structures before implementing them in production. Such servers should be installed in agency created test trees. Please contact ITS to obtain unique IPX internal numbers to assign to servers.

A significant factor in the successful implementation and operation of NetWare is the use of a server backup product that works properly with Novell Directory Services. It is important that any backup product be fully compliant with Novell's Storage Management Specification (SMS). Generally, workstation-based backup products are neither NDS nor SMS compliant and therefore do not adequately back up trustee rights, server-specific information, etc. Based on an evaluation of backup products, ITS recommends using Veritas Backup Exec.

Administrators should obtain the latest versions of the ITS documents referenced above prior to beginning the actual installation.

Agencies will typically choose to upgrade/install NetWare servers over a weekend. NetWare is distributed on CD-ROM, and therefore having a CD-ROM drive on the server is recommended.

Requirements

Verify that you:

• Are operating a Windows-based system with CentreWare DP software and at least one printer driver installed

• Are a NetWare network administrator, or an administrative person with ADMIN/SUPERVISOR or ADMIN/SUPERVISOR EQUIVALENT login rights to the NetWare server(s) servicing the Phaser printer

• Have a basic knowledge of NetWare

NTFS vs FAT File System

NTFS

NTFS is a high-performance and self-healing file system proprietary to Windows NT, 2000, XP, Server 2003, Vista and Windows 7, which supports file-level security, compression and auditing. It also supports large volumes and powerful storage solutions such as RAID. An important newer feature of NTFS is the ability to encrypt files and folders to protect your sensitive data.

NTFS is a Microsoft file system. It was introduced in Windows NT and has been the default file system for every version of Microsoft Windows since. NTFS replaced the aged FAT file system and addresses most of FAT's shortcomings. NTFS has been continuously maintained and improved by Microsoft, and the current version provides secure data storage that meets the requirements of modern hardware and usage. However, NTFS remains a closed standard: Microsoft publishes neither its interfaces nor its implementation details. Therefore only Microsoft operating systems can use NTFS natively, and even OSes that are capable of reading and writing NTFS cannot be installed on hard disks formatted as NTFS.

The NTFS acronym stands for New Technology File System. The name derives from the very innovative data storage techniques that were refined in NTFS. While none of the techniques is unique to NTFS, it was the first time so many innovations were released at once in a production file system. The FAT file system had long been criticized for lacking some of the more obvious improvements, such as journaling, disk quotas and file compression. However, these improvements made NTFS incompatible with previous versions of Windows, and also with hard disk tools designed for FAT file systems. For example, data recovery tools such as GetDataBack and partitioning tools such as PartitionMagic would run on Windows NT, yet could not operate on the newer file system. This led to much frustration among users who had purchased licenses for these products before upgrading to Windows NT.

NTFS Master File Table (MFT)

Each file on an NTFS volume is represented by a record in a special file called the master file table (MFT). NTFS reserves the first 16 records of the table for special information. The first record of this table describes the master file table itself, followed by an MFT mirror record. If the first MFT record is corrupted, NTFS reads the second record to find the MFT mirror file, whose first record is identical to the first record of the MFT. The locations of the data segments for both the MFT and the MFT mirror file are recorded in the boot sector; a duplicate of the boot sector is located at the logical center of the disk.

The third record of the MFT is the log file, used for file recovery. The seventeenth and following records of the master file table are for each file and directory (also viewed as a file by NTFS) on the volume.
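
The reserved records just described can be pictured with a small C table. This is a conceptual model only, not the real on-disk record format; the metafile names ($MFT, $MFTMirr, $LogFile) are the conventional NTFS names for the master file table, its mirror and the log file.

#include <stdio.h>
#include <stdint.h>

/* Conceptual view of the first reserved MFT records: every other
   file or directory on the volume gets a record of its own. */
typedef struct {
    uint64_t    record_no;
    const char *name;
    const char *purpose;
} mft_record_t;

static const mft_record_t reserved[] = {
    { 0, "$MFT",     "the master file table itself"    },
    { 1, "$MFTMirr", "mirror of the first MFT records" },
    { 2, "$LogFile", "log file used for file recovery" },
    /* ... records 3-15 hold the other metafiles ... */
};

int main(void)
{
    for (size_t i = 0; i < sizeof reserved / sizeof reserved[0]; i++)
        printf("record %llu: %-9s %s\n",
               (unsigned long long)reserved[i].record_no,
               reserved[i].name, reserved[i].purpose);
    return 0;
}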





Metafiles


The first 16 NTFS files (metafiles) are system files, each responsible for some aspect of system operation. The advantage of this modular approach is flexibility: on FAT, a physical failure in the FAT area is fatal for all disk operations, whereas NTFS can relocate, and even fragment, all of its system areas to steer around damaged regions of the surface, with the exception of the first 16 MFT elements.

The metafiles reside in the NTFS root directory; their names start with the character "$", though it is difficult to get any information about them by standard means. Curiously, even for these files a real size is reported, so it is possible to find out, for example, how much the operating system spends on cataloguing your whole disk by looking at the size of the $MFT file.

Files and streams

So the system has files and nothing but files. What does a file on NTFS consist of?

First of all, the compulsory element is the record in the MFT: as said above, all files on the disk are mentioned in the MFT. All information about a file except the data itself is stored there: the file name, its size, the positions of its fragments on the disk, and so on. If one MFT record is not enough for the information, several records are used, and not necessarily consecutive ones. The optional element is the file's data streams. "Optional" may seem a strange word here, but there is nothing strange about it. Firstly, a file may have no data, in which case no free disk space is used for it. Secondly, a file may be quite small, and then a rather elegant solution is applied: the file's data is stored right in the MFT, in the space left over from the master data within one MFT record. Files whose size is a few hundred bytes usually have no "physical" image in the main file area; all of their data is stored in one place, the MFT.

The directories

A directory on NTFS is a specific file storing references to other files and directories, establishing the hierarchical structure of the disk's data. The directory file is divided into blocks, each containing a file name, basic attributes and a reference to the MFT element which holds the complete information about that directory entry. The internal structure of the directory is a binary tree. To find a file with a given name in a linear directory, such as FAT's, the operating system has to look through every element of the directory until it finds the right one. A binary tree arranges file names so that lookup is faster, by obtaining binary answers to questions about a file's position: the tree can answer whether the required name is above or below a given element. We begin by asking this of the middle element, and each answer narrows the search area by half. The files are sorted alphabetically, so each question is answered simply by comparing initial letters. The search area, now halved, is then examined the same way, starting again from its middle element.
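
A sketch of this lookup idea in C: the directory entries are kept sorted, and each comparison against the middle element halves the search area, exactly as described above. The file names are made up.

#include <stdio.h>
#include <string.h>

/* Directory entries sorted alphabetically; a binary search answers
   "above or below?" at each step instead of scanning linearly as a
   FAT directory must. */
static const char *entries[] = {
    "autoexec.bat", "config.sys", "report.doc", "system.ini", "win.ini"
};

static int find(const char *name, int n)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;            /* the "middle element"   */
        int cmp = strcmp(name, entries[mid]);
        if (cmp == 0) return mid;
        if (cmp < 0)  hi = mid - 1;         /* required name is above */
        else          lo = mid + 1;         /* required name is below */
    }
    return -1;
}

int main(void)
{
    int i = find("report.doc", 5);
    printf(i >= 0 ? "found at entry %d\n" : "not found (%d)\n", i);
    return 0;
}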


FAT


The FAT file system was first introduced in the days of MS-DOS way back in 1981. The purpose of the File Allocation Table is to provide the mapping between clusters - the basic unit of logical storage on a disk at the operating system level - and the physical location of data in terms of cylinders, tracks and sectors - the form of addressing used by the drive's hardware controller.

The FAT contains an entry for every file stored on the volume, holding the address of the file's starting cluster. Each cluster contains a pointer to the next cluster in the file, or an end-of-file marker (0xFFFF) indicating that this cluster is the end of the file. As an example, consider three files: File1.txt uses three clusters, File2.txt is a fragmented file that also requires three clusters, and File3.txt fits in one cluster. In each case, the file allocation table entry points to the first cluster of the file.
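
Here is a toy C model of following the cluster chain for File1.txt as described above. The table contents are invented for illustration, and 0xFFFF serves as the end-of-file marker.

#include <stdio.h>
#include <stdint.h>

#define EOF_MARK 0xFFFF  /* end-of-chain marker, as in FAT16 */

/* Toy FAT: fat[c] holds the number of the cluster that follows c.
   File1.txt starts at cluster 2 and occupies clusters 2 -> 3 -> 4. */
static uint16_t fat[16] = {
    [2] = 3, [3] = 4, [4] = EOF_MARK,
};

static void walk_chain(uint16_t start)
{
    for (uint16_t c = start; ; c = fat[c]) {
        printf("cluster %u\n", c);
        if (fat[c] == EOF_MARK) break;
    }
}

int main(void)
{
    walk_chain(2);  /* prints clusters 2, 3, 4 */
    return 0;
}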


The FAT16 File System


The FAT16 file system is compatible with the majority of operating systems: MS-DOS, Windows 95, Windows 98, Windows Me, Windows NT, Windows 2000 and Windows XP can all utilize it. FAT16 generally manages disk space well when the volume is smaller than 256 MB. You should refrain from using FAT16 on volumes larger than 512 MB, and it cannot be used at all on volumes that exceed 4 GB.

FAT16 maps clusters on the FAT partition. A cluster is the smallest unit that the operating system uses when it assigns space on the partition; a cluster is also sometimes referred to as an allocation unit.

The file allocation table identifies a cluster in the FAT partition as either:

• Unused

• Cluster in use by a file

• Bad cluster

• Last cluster in a file

The FAT16 volume is structured as follows:

• Boot sector on the system partition

• The primary file allocation table

• The copy or duplicate file allocation table

• A root folder

• Other folders and all files


The root folder holds an entry for each file and folder stored on the FAT16 volume, and its maximum number of table entries is set at 512 per disk drive. A file's or folder's entry contains the information listed below (a C sketch of this 32-byte entry follows the list):

• Name: This is in 8.3 format

• Attribute: 8 bits

• Create time: 24 bits

• Create date: 16 bits

• Last access date: 16 bits

• Last modified time: 16 bits

• Last modified date: 16 bits

• Starting cluster number in the file allocation table: 16 bits

• File size: 32 bits
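
The field list above adds up to a 32-byte directory entry. Below is a C sketch of how such an entry might be declared; the field names and the reserved fields are assumptions based on the classic FAT16 layout, not copied from any specification text.

#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    char     name[8];            /* file name, space padded            */
    char     ext[3];             /* extension (the "8.3" format)       */
    uint8_t  attribute;          /* attribute: 8 bits                  */
    uint8_t  reserved;
    uint8_t  create_time_tenths; /* with create_time: 24 bits total    */
    uint16_t create_time;
    uint16_t create_date;        /* create date: 16 bits               */
    uint16_t last_access_date;   /* last access date: 16 bits          */
    uint16_t reserved2;          /* holds the high cluster word on FAT32 */
    uint16_t last_mod_time;      /* last modified time: 16 bits        */
    uint16_t last_mod_date;      /* last modified date: 16 bits        */
    uint16_t start_cluster;      /* starting cluster in the FAT: 16 bits */
    uint32_t file_size;          /* file size: 32 bits                 */
} FatDirEntry;
#pragma pack(pop)

int main(void)
{
    printf("entry size: %zu bytes\n", sizeof(FatDirEntry)); /* 32 */
    return 0;
}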

Disadvantages of FAT16

A few disadvantages associated with the FAT16 file system are summarized below:

• The FAT16 file system has no local security for the file system or compression features.

• The boot sector is not backed up.

• The root folder can only have a maximum of 512 entries, which means that files with long names can greatly decrease the number of available entries.

• FAT16 does not work well with large volumes.

The FAT32 File System

The FAT32 file system can handle larger partitions than FAT16: FAT32 supports partitions of up to 2047 GB, compared to FAT16's 4 GB. With FAT32 there is no restriction on the number of entries the root folder can contain, whereas FAT16's root folder was limited to 512 entries. The boot sector is also backed up on FAT32 volumes. A FAT32 volume must, however, have a minimum of 65,527 clusters.

The FAT32 architecture is very much like the architecture of the FAT16 file system. FAT32 was designed with little architectural changes to ensure compatibility with existing programs and device drivers. What this means is that device drivers and FAT tools used for FAT16 partitions would continue to work for FAT32 partitions.

FAT32 does however need 4 bytes in the file allocation table to store cluster values. This has led to the revision or expansion of internal data structures, on-disk data structures and published APIs.

A few disadvantages associated with the FAT32 file system are summarized below:

• Like the FAT16 file system, the FAT32 file system includes no local security for the file system or compression features.

• The MS-DOS, Windows 95, and Windows NT 4.0 OSs are unable to access or read FAT32 partitions.

• Neither FAT16 nor FAT32 partitions scale well: the file allocation table increases in size as the volume grows.

NTFS vs FAT

To summarize the comparison between the FAT family and NTFS:

• Maximum volume size: FAT16 works best below 256 MB and cannot exceed 4 GB; FAT32 supports partitions up to 2047 GB; NTFS supports large volumes and powerful storage solutions such as RAID.

• Root folder: FAT16 is limited to 512 root entries; FAT32 and NTFS have no such restriction.

• Security: FAT16 and FAT32 provide no local file-system security; NTFS supports file-level security and auditing.

• Compression and encryption: not available on FAT16 or FAT32; NTFS supports both.

• Reliability: FAT16 does not back up the boot sector, while FAT32 does; NTFS is self-healing, keeping a log file and an MFT mirror for recovery.

• Compatibility: FAT16 is readable by almost every operating system; FAT32 cannot be accessed by MS-DOS, Windows 95 or Windows NT 4.0; NTFS can be used natively only by Microsoft operating systems from Windows NT onwards.

• Scalability: neither FAT16 nor FAT32 scales well, since the file allocation table grows with the volume.

Thursday, March 11, 2010



Memory Management
MEMORY MANAGEMENT
The memory management subsystem is one of the most important parts of the operating system. Since the early days of computing, there has been a need for more memory than exists physically in a system. Strategies have been developed to overcome this limitation, and the most successful of these is virtual memory. Virtual memory makes the system appear to have more memory than it actually has by sharing it between competing processes as they need it. Virtual memory does more than just make your computer's memory go further.

The memory management subsystem provides:

Large Address Spaces
The operating system makes the system appear to have a larger amount of memory than it actually has; the virtual memory can be many times larger than the physical memory in the system.

Protection
Each process in the system has its own virtual address space. These virtual address spaces are completely separate from each other, so a process running one application cannot affect another. The hardware virtual memory mechanisms also allow areas of memory to be protected against writing, which protects code and data from being overwritten by rogue applications.

Memory Mapping
Memory mapping is used to map image and data files into a process's address space: the contents of a file are linked directly into the virtual address space of the process.

Fair Physical Memory Allocation
The memory management subsystem allows each running process a fair share of the physical memory of the system.

Shared Virtual Memory
Although virtual memory allows processes to have separate (virtual) address spaces, there are times when processes need to share memory. For example, there could be several processes in the system running the bash command shell. Rather than have several copies of bash, one in each process's virtual address space, it is better to have a single copy in physical memory that all the processes running bash share. Dynamic libraries are another common example of executing code shared between several processes. Shared memory can also be used as an Inter-Process Communication (IPC) mechanism, with two or more processes exchanging information via memory common to all of them. Linux supports the UNIX System V shared memory IPC.

Demand Paging
As there is much less physical memory than virtual memory, the operating system must be careful not to use physical memory inefficiently. One way to save physical memory is to load only those virtual pages that are currently being used by the executing program. For example, a database program may be run to query a database; in this case not the entire database needs to be loaded into memory, just the data records being examined. If the query is a search query, it makes no sense to load the code that deals with adding new records. This technique of loading virtual pages into memory only as they are accessed is known as demand paging. When a process attempts to access a virtual address that is not currently in memory, the processor cannot find a page table entry for the referenced virtual page. If there is no entry in process X's page table for virtual page frame number 2, then whenever process X attempts to read from an address within that page the processor cannot translate the address into a physical one. At this point the processor notifies the operating system that a page fault has occurred.
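
The page-fault path just described can be sketched in a few lines of C. This toy translator keeps one entry per virtual page; the "fault handler" here merely pretends to load the page, where a real kernel would fetch it from the executable image or the swap file. The page size and table size are arbitrary.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NPAGES 8

/* Toy page table: a cleared 'present' bit means the page is not in
   physical memory and a reference to it raises a page fault. */
typedef struct {
    bool     present;
    uint32_t frame;   /* physical page frame number */
} pte_t;

static pte_t page_table[NPAGES];

static uint32_t translate(uint32_t vpage, uint32_t offset)
{
    if (!page_table[vpage].present) {
        /* Here a real kernel would load the page on demand. */
        printf("page fault on virtual page %u\n", vpage);
        page_table[vpage].present = true;
        page_table[vpage].frame = vpage;  /* pretend allocation */
    }
    return page_table[vpage].frame * 4096 + offset;
}

int main(void)
{
    printf("physical address: %u\n", translate(2, 100)); /* faults, then maps */
    printf("physical address: %u\n", translate(2, 200)); /* already present  */
    return 0;
}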

Swapping
If a process needs to bring a virtual page into physical memory and there are no free physical pages available, the operating system must make room by discarding another page from physical memory. If the page to be discarded came from an image or data file and has not been written to, it does not need to be saved; it can simply be discarded, and if the process needs it again it can be brought back into memory from the image or data file. However, if the page has been modified, the operating system must preserve its contents so that they can be accessed later. Such a page is known as a dirty page, and when removed from memory it is saved in a special file called the swap file. Accesses to the swap file are very slow relative to the speed of the processor and physical memory, so the operating system must juggle the need to write pages to disk against the need to retain them in memory for reuse.
Physical and Virtual Addressing Modes
It does not make much sense for the operating system itself to run in virtual memory; that would be a nightmare situation in which the operating system must maintain page tables for itself. Most multi-purpose processors therefore support a physical address mode as well as a virtual address mode. Physical addressing mode requires no page tables, and the processor does not attempt any address translations in this mode. The Linux kernel is linked to run in physical address space. The Alpha AXP processor does not have a special physical addressing mode; instead, it divides the memory space into several areas and designates two of them as physically mapped addresses. This kernel address space is known as the KSEG address space, and it encompasses all addresses upwards from 0xfffffc0000000000. In order to execute code linked in KSEG (by definition, kernel code) or access data there, the code must be executing in kernel mode. The Linux kernel on Alpha is linked to execute from address 0xfffffc0000310000.

Access Control
The page table entries also contain access control information. Since the processor is already using the page table entry to map a process's virtual address to a physical one, it can easily use the access control information to check that the process is not accessing memory in a way that it should not.

Caches
If you were to implement a system using the above theoretical model, it would work, but not particularly efficiently. Both operating system and processor designers try hard to extract more performance from the system. Apart from making the processors, memory and so on faster, the best approach is to maintain caches of useful information and data that make some operations faster. Linux uses a number of memory-management-related caches:

Buffer Cache
The buffer cache contains data buffers that are used by the block device drivers. These buffers are of fixed sizes (for example 512 bytes) and contain blocks of information that have either been read from a block device or are being written to it. The buffer cache is indexed by the device identifier and the desired block number and is used to quickly find a block of data. Block devices are only ever accessed via the buffer cache. If data can be found in the buffer cache, it does not need to be read from the physical block device, for example a hard disk, and access to it is much faster.

Page Cache
This is used to speed up access to images and data on disk. It caches the logical contents of a file a page at a time and is accessed via the file and offset within the file. As pages are read into memory from disk, they are cached in the page cache.

Swap Cache
Only modified (or dirty) pages are saved in the swap file. So long as a page is not modified after it has been written to the swap file, the next time it is swapped out there is no need to write it again, as it is already in the swap file; the page can simply be discarded.

Hardware Caches
One commonly implemented hardware cache sits in the processor: a cache of page table entries. The processor does not always read the page table directly, but instead caches translations for pages as it needs them. These are the Translation Look-aside Buffers (TLBs), and they contain cached copies of page table entries from one or more processes in the system.

Memory Mapping
When an image is executed, the contents of the executable image must be brought into the process's virtual address space. The same is true of any shared libraries the executable has been linked against. The executable file is not actually brought into physical memory; instead it is merely linked into the process's virtual memory. Then, as the parts of the program are referenced by the running application, the image is brought into memory from the executable file. This linking of an image into a process's virtual address space is known as memory mapping.

Areas of Virtual Memory
Every process's virtual memory is represented by an mm_struct data structure. This contains information about the image it is currently executing (for example bash) and also has pointers to a number of vm_area_struct data structures. Each vm_area_struct describes the start and end of an area of virtual memory, the process's access rights to that memory, and a set of operations for it. These operations are the routines that Linux must use when manipulating that area of virtual memory. For example, one of the virtual memory operations performs the correct action when the process attempts to access virtual memory that (as a page fault reveals) is not actually in physical memory. This is the nopage operation.
The nopage operation is used when Linux demand-pages the pages of an executable image into memory.

Demand Paging
Once an executable image has been memory-mapped into a process's virtual memory, it can start to execute. As only the very start of the image is physically pulled into memory, the process will soon access an area of virtual memory that is not yet in physical memory. When a process accesses a virtual address that does not have a valid page table entry, the processor reports a page fault to Linux. The page fault describes the virtual address where the fault occurred and the type of memory access that caused it.

Dynamic Memory Allocation
In computer science, dynamic memory allocation (also known as heap-based memory allocation) is the allocation of memory storage for use in a computer program during the runtime of that program. It can also be seen as a way of distributing ownership of limited memory resources among many pieces of data and code. Dynamically allocated memory exists until it is released, either explicitly by the programmer or by a garbage collector. This is in contrast to static memory allocation, which has a fixed duration; an object allocated dynamically is said to have a dynamic lifetime.

Garbage Collection
In computer science, garbage collection (GC) is a form of automatic memory management. It is a special case of resource management in which the limited resource being managed is memory. The garbage collector, or just collector, attempts to reclaim garbage: memory occupied by objects that are no longer in use by the program. Garbage collection was invented by John McCarthy around 1959 to solve problems in Lisp.

Memory Management Unit
A memory management unit (MMU), sometimes called a paged memory management unit (PMMU), is a computer hardware component responsible for handling accesses to memory requested by the CPU. Its functions include translation of virtual addresses to physical addresses (i.e., virtual memory management), memory protection, cache control, bus arbitration and, in simpler computer architectures (especially 8-bit systems), bank switching.

Page Table
A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process; physical addresses are those unique to the CPU, i.e., to RAM.

Paging
In computer operating systems there are various ways in which the operating system can store and retrieve data from secondary storage for use in main memory. One such memory management scheme is referred to as paging. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be non-contiguous.
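
A minimal C illustration of dynamic allocation as defined above: the block obtained from malloc has a dynamic lifetime, existing until free releases it, and since C has no garbage collector the programmer must release it explicitly.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 5;
    int *values = malloc(n * sizeof *values);   /* allocated at runtime */
    if (values == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        values[i] = (int)(i * i);
    printf("values[4] = %d\n", values[4]);
    free(values);   /* explicit release: no garbage collector in C */
    return 0;
}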

Virtual Memory
Virtual memory is a computer system technique, developed at the University of Manchester, which gives an application program the impression that it has contiguous working memory (an address space), while in fact that memory may be physically fragmented and may even overflow onto disk storage. The program thinks it has a large range of contiguous addresses, but in reality the parts it is currently using are scattered around RAM, and the inactive parts are saved in a disk file. Developed for multitasking kernels, virtual memory provides two primary functions: each process has its own address space, so it need not be relocated and need not use relative addressing; and each process sees one contiguous block of free memory upon launch, with fragmentation hidden.


Process Scheduling
PROCESS SCHEDULING


WHAT IS A PROCESS?

A process can simply be defined as a program in execution: a program currently making use of the processor at any one time. A process can be in any of the following states:
Ready:
This is when the process is ready to be run on the processor.
Running:
This is when the process is currently making use of the processor.
Blocked:
This is when the process is waiting for an input, such as a user response or data from another process. A process may be in the blocked state if it needs to access a resource.

Other variations of the above-named states are:
Ready Suspend:
This is when a process is swapped out of memory by the memory management system in order to free memory for other processes.
Blocked Suspend:
This is when a process is swapped out of memory after incurring an I/O wait.

Terminate:
This is when a process has finished its run.
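
The states listed above can be written down as a C enumeration; the short main walks one process through a typical ready -> running -> blocked sequence. The enum names are just one possible rendering of the states.

#include <stdio.h>

/* The process states described above. A real scheduler would keep
   one such state per process. */
typedef enum {
    READY,            /* ready to be run on the processor        */
    RUNNING,          /* currently making use of the processor   */
    BLOCKED,          /* waiting for input or another resource   */
    READY_SUSPEND,    /* swapped out while ready                 */
    BLOCKED_SUSPEND,  /* swapped out after incurring an I/O wait */
    TERMINATED        /* finished its run                        */
} proc_state;

static const char *name(proc_state s)
{
    static const char *names[] = { "ready", "running", "blocked",
        "ready-suspend", "blocked-suspend", "terminated" };
    return names[s];
}

int main(void)
{
    proc_state s = READY;
    printf("%s -> ", name(s));
    s = RUNNING;                /* dispatched by the scheduler */
    printf("%s -> ", name(s));
    s = BLOCKED;                /* e.g. waiting for user input */
    printf("%s\n", name(s));
    return 0;
}
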
Summary
Only one process at a time is running on the CPU

Process gives up CPU:

If it starts waiting for an event

Otherwise: other processes need fair access

OS schedules which ready process to run next

Time slice or quantum for each process

Scheduling algorithms affect performance

SCHEDULING

Scheduling is a key concept in computer multitasking, multiprocessing and real-time operating system design. It refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than available CPUs. This assignment is carried out by software known as a scheduler or dispatcher.

SCHEDULER

The scheduler is concerned mainly with:

• CPU utilization - keeping the CPU as busy as possible.

• Throughput - the number of processes that complete their execution per time unit.

• Turnaround time - the total time between submission of a process and its completion.

• Waiting time - the amount of time a process has been waiting in the ready queue.

• Response time - the amount of time from when a request was submitted until the first response is produced.

• Fairness - equal CPU time for each thread.
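
A small C sketch computing two of these measures for three processes. The arrival, start and completion times are invented, and the waiting-time formula assumes each process runs once to completion without preemption.

#include <stdio.h>

int main(void)
{
    int arrival[3]    = { 0, 1, 2 };
    int start[3]      = { 0, 5, 8 };
    int completion[3] = { 5, 8, 12 };

    for (int p = 0; p < 3; p++) {
        int turnaround = completion[p] - arrival[p]; /* submit -> done */
        int waiting    = start[p] - arrival[p];      /* time in queue  */
        printf("P%d: turnaround=%d waiting=%d\n", p, turnaround, waiting);
    }
    return 0;
}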

Types of schedulers
Operating systems may feature up to 3 distinct types of schedulers: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler and a short-term scheduler (also known as a dispatcher). The names suggest the relative frequency with which these functions are performed.

1. Long-term Scheduler
The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system and the degree of concurrency to be supported at any one time - i.e., whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks. Without proper real-time scheduling, modern GUI interfaces would seem sluggish.

2. Mid-term Scheduler
The mid-term scheduler temporarily removes processes from main memory and places them on secondary memory (such as a disk drive), or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, incorrectly, as "paging out" or "paging in"). The mid-term scheduler may decide to swap out a process which has not been active for some time, which has a low priority, which is page-faulting frequently, or which is taking up a large amount of memory, in order to free up main memory for other processes. The process is swapped back in later, when more memory is available or when it has been unblocked and is no longer waiting for a resource.

In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as "swapped out processes" upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or "lazy loaded".
3. Short-term Scheduler
The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating-system call, or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.
Dispatcher
Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
Switching context
Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
Scheduling criteria
Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best.
The criteria include the following:
1. CPU Utilization. We want to keep the CPU as busy as possible.
2. Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10 processes per second.
3. Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, under some circumstances, it is desirable to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time. Investigators have suggested that, for interactive systems, it is more important to minimize the variance in the response time than to minimize the average response time. A system with reasonable and predictable response time may be considered more desirable than a system that is faster on the average but is highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance.
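To see how waiting time and turnaround time are computed in practice, here is a small C sketch for first-come, first-served scheduling. The burst times are made-up numbers, and all processes are assumed to be submitted at time 0.

    #include <stdio.h>

    /* compute average waiting and turnaround time under first-come,
       first-served scheduling, to illustrate the criteria above */
    int main(void)
    {
        int burst[] = { 24, 3, 3 };        /* hypothetical CPU burst times */
        int n = 3;
        int completion = 0, total_wait = 0, total_turnaround = 0;

        for (int i = 0; i < n; i++) {
            int wait = completion;         /* time spent in the ready queue */
            completion += burst[i];        /* the process finishes here     */
            total_wait += wait;
            total_turnaround += completion;
            printf("P%d: waiting %2d, turnaround %2d\n", i + 1, wait, completion);
        }
        printf("average waiting time:    %.2f\n", (double)total_wait / n);
        printf("average turnaround time: %.2f\n", (double)total_turnaround / n);
        return 0;
    }

With these numbers the averages come out to 17 and 27 time units; running the short jobs first would cut the average waiting time sharply, which is exactly why the choice of algorithm matters.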
CPU-bound: spends most of its time doing computation - little I/O.
I/O-bound: spends most of its time doing I/O - little computation.
Multilevel scheduling
Processes are classified into different groups:
foreground (interactive) vs. background (batch)
Each group has its own ready queue.

Preemptive Vs Nonpreemptive Scheduling
The Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.

Nonpreemptive Scheduling
A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process. Following are some characteristics of nonpreemptive scheduling:
In a nonpreemptive system, short jobs are made to wait by longer jobs, but the overall treatment of all processes is fair.
In a nonpreemptive system, response times are more predictable because incoming high-priority jobs cannot displace waiting jobs.
In nonpreemptive scheduling, a scheduler executes jobs in the following two situations: when a process switches from the running state to the waiting state, and when a process terminates.

Preemptive Scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it. The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling, in contrast to the "run to completion" method.
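A minimal C simulation of a preemptive, round-robin discipline might look like the sketch below; the burst times and the quantum of 2 are arbitrary choices for illustration. Each process runs for at most one quantum before the CPU is taken away from it.

    #include <stdio.h>

    /* toy round-robin simulation: the CPU is forcibly taken away
       after each time quantum - a preemptive discipline */
    int main(void)
    {
        int remaining[] = { 5, 3, 8 };     /* hypothetical burst times */
        int n = 3, quantum = 2, time = 0, left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0)
                    continue;              /* process already finished */
                int run = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%2d: P%d runs for %d\n", time, i + 1, run);
                time += run;
                remaining[i] -= run;
                if (remaining[i] == 0) {
                    printf("t=%2d: P%d terminates\n", time, i + 1);
                    left--;
                }
            }
        }
        return 0;
    }

Under a nonpreemptive discipline, by contrast, the inner loop would run each process to completion before moving on.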




Presentation Data (HPFS & Ext3)
HPFS and Ext3
High Performance File System
HPFS, or High Performance File System, is a file system created specifically for the OS/2 operating system to improve upon the limitations of the FAT file system. It was written by Gordon Letwin and others at Microsoft and added to OS/2 version 1.2, at that time still a joint undertaking of Microsoft and IBM.
The HPFS file system was first introduced with OS/2 1.2 to allow for greater access to the larger hard drives that were then appearing on the market. Additionally, a new file system was needed to extend the naming system, organization, and security for the growing demands of the network-server market. HPFS maintains the directory organization of FAT, but adds automatic sorting of the directory based on filenames. Filenames are extended to up to 254 double-byte characters. HPFS also allows a file to be composed of "data" and special attributes, to allow for increased flexibility in terms of supporting other naming conventions and security. In addition, the unit of allocation is changed from clusters to physical sectors (512 bytes), which reduces lost disk space.
Under HPFS, directory entries hold more information than under FAT. As well as the attributes, this includes information about the modification, creation, and access dates and times. Instead of pointing to the first cluster of the file, directory entries under HPFS point to the FNODE. The FNODE can contain the file's data, or pointers that may point to the file's data or to other structures that will eventually point to the file's data.
HPFS attempts to allocate as much of a file in contiguous sectors as possible, in order to increase speed when doing sequential processing of a file.
HPFS organizes a drive into a series of 8 MB bands, and whenever possible a file is contained within one of these bands. Between each of these bands are 2 KB allocation bitmaps, which keep track of which sectors within a band have and have not been allocated. Banding increases performance because the drive head does not have to return to the logical top (typically cylinder 0) of the disk, but only to the nearest band allocation bitmap, to determine where a file is to be stored.
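The banding arithmetic can be shown with a toy C calculation, assuming 512-byte sectors and the 8 MB bands described above. This is illustrative only; the real HPFS on-disk layout (such as the exact placement of the bitmaps between bands) is more involved.

    #include <stdio.h>

    /* illustrative arithmetic only: locate the HPFS band containing a sector,
       assuming 512-byte sectors and 8 MB bands */
    #define SECTOR_SIZE 512u
    #define BAND_BYTES  (8u * 1024u * 1024u)
    #define SECTORS_PER_BAND (BAND_BYTES / SECTOR_SIZE)   /* 16384 sectors */

    int main(void)
    {
        unsigned long sector = 50000;   /* hypothetical sector number */
        unsigned long band   = sector / SECTORS_PER_BAND;
        printf("sector %lu lies in band %lu\n", sector, band);
        printf("band %lu spans sectors %lu to %lu\n",
               band, band * SECTORS_PER_BAND,
               (band + 1) * SECTORS_PER_BAND - 1);
        return 0;
    }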
Additionally, HPFS includes a couple of unique special data objects:
1. Super Block
The Super Block is located in logical sector 16 and contains a pointer to the FNODE of the root directory. One of the biggest dangers of using HPFS is that if the Super Block is lost or corrupted due to a bad sector, so are the contents of the partition, even if the rest of the drive is fine. It would be possible to recover the data on the drive by copying everything to another drive with a good sector 16 and rebuilding the Super Block; however, this is a very complex task.
2. Spare Block
The Spare Block is located in logical sector 17 and contains a table of "hot fixes" and the Spare Directory Block. Under HPFS, when a bad sector is detected, the "hot fixes" entry is used to logically point to an existing good sector in place of the bad sector. This technique for handling write errors is known as hot fixing: if an error occurs because of a bad sector, the file system moves the information to a different sector and marks the original sector as bad. This is all done transparently to any applications performing disk I/O (that is, the application never knows that there were any problems with the hard drive). Using a file system that supports hot fixing eliminates error messages such as the FAT "Abort, Retry, or Fail?" message that occurs when a bad sector is encountered.
Note: The version of HPFS that is included with Windows NT does not support hot fixing.
Among its improvements are:
support for mixed case file names, in different code pages
support for long file names (256 characters as opposed to FAT's 8+3 characters)
more efficient use of disk space (files are not stored using multiple-sector clusters but on a per-sector basis)
an internal architecture that keeps related items close to each other on the disk volume
less fragmentation of data
extent-based space allocation
separate datestamps for last modification, last access, and creation (as opposed to FAT's one last modification datestamp)
a B+ tree structure for directories
root directory located at the mid-point, rather than beginning of the disk, for faster average access
HPFS also can keep 64 KB of metadata ("extended attributes") per file.
IBM offers two kind of IFS drivers for this file system:
the standard one with a cache limited to 2 MB
HPFS386 provided with the server versions of OS/2
Windows Native Support
Windows 95 and its successors Windows 98 and Windows Me can read/write HPFS only when it is mapped via a network share; they cannot read it from a local disk. They list the NTFS partitions of networked computers as "HPFS", because NTFS and HPFS share the same filesystem identification number in the partition table.
Windows NT 3.1 and 3.5 have native read/write support for local disks and can even be installed onto an HPFS partition. This is because NT was originally going to be a version of OS/2.
Windows NT 3.51 can also read and write from local HPFS-formatted drives. However, Microsoft discouraged using HPFS in Windows NT 4 and in subsequent versions, and even removed the ability of NT 3.51 to format an HPFS file system. Starting with Windows NT 4, the file system driver pinball.sys, which enables read/write access, is no longer included in a default installation. Pinball.sys is included on the installation media for Windows 2000 and can be manually installed and used with some limitations. Later Windows versions do not ship with this driver.
Microsoft retained rights to OS/2 technologies, including the HPFS file system, after the two companies ceased collaboration. Since Windows NT 3.1 was designed for more rigorous (enterprise-class) use than previous versions of Windows, it included support for HPFS (and NTFS), giving it a larger storage capacity than FAT file systems. However, since HPFS lacks a journal, any recovery after an unexpected shutdown or other error state takes progressively longer as the file system grows. A utility such as CHKDSK would need to scan each entry in the file system to ensure no errors are present, a problem which is vastly reduced on NTFS, where the journal is simply replayed.
Advantages of HPFS
HPFS is best for drives in the 200-400 MB range.
Support for long file names up to 256 characters.
Upper and lower case - HPFS preserves case, but it is not case-sensitive.
Native support for extended attributes (EAs) - FAT is just too fragile to support this, and the Workplace Shell depends on it heavily.
HPFS provides high performance.
Much greater integrity: Signature at the beginning of the system structure sectors, forwards and backwards links in fnode trees.
Much less fragmentation.
Disadvantages of HPFS
Because of the overhead involved in HPFS, it is not a very efficient choice for a volume of under approximately 200 MB. In addition, with volumes larger than about 400 MB, there will be some performance degradation.
You cannot set security on HPFS under Windows NT.
HPFS is only supported under Windows NT versions 3.1, 3.5, and 3.51. Windows NT 4.0 cannot access HPFS partitions.
Ext3
The ext3, or third extended, file system is a journaled file system commonly used by the Linux kernel. It is the default file system for many popular Linux distributions. Stephen Tweedie first revealed that he was working on extending ext2 in a 1998 paper, "Journaling the Linux ext2fs Filesystem", and later in a February 1999 kernel mailing-list posting, and the file system was merged with the mainline Linux kernel in November 2001, from 2.4.15 onward. Its main advantage over ext2 is journaling, which improves reliability and eliminates the need to check the file system after an unclean shutdown. Its successor is ext4.
Journaling results in massively reduced time spent recovering a file system after a crash, and is therefore in high demand in environments where high availability is important - not only to improve recovery times on single machines, but also to allow a crashed machine's file system to be recovered on another machine when there is a cluster of nodes with a shared disk.
Advantages
Although its performance (speed) is less attractive than competing Linux file systems such as JFS, ReiserFS, and XFS, ext3 has a significant advantage in that it allows in-place upgrades from the ext2 file system without having to back up and restore data. Ext3 also uses less CPU power than ReiserFS and XFS. It is also considered safer than the other Linux file systems, due to its relative simplicity and wider testing base.
The ext3 file system adds, over its predecessor:
A Journaling file system
Online file system growth
Htree indexing for larger directories. An HTree is a specialized version of a B-tree (not to be confused with the H tree fractal).
Without these, any ext3 file system is also a valid ext2 file system. This has allowed well-tested and mature file-system maintenance utilities for maintaining and repairing ext2 file systems to also be used with ext3 without major changes. The ext2 and ext3 file systems share the same standard set of utilities, e2fsprogs, which includes an fsck tool. The close relationship also makes conversion between the two file systems (both forward to ext3 and backward to ext2) straightforward.
While in some contexts the lack of "modern" file-system features such as dynamic inode allocation and extents could be considered a disadvantage, in terms of recoverability this gives ext3 a significant advantage over file systems with those features. The file-system metadata is all in fixed, well-known locations, and there is some redundancy inherent in the data structures that may allow ext2 and ext3 to be recoverable in the face of significant data corruption, where tree-based file systems may not be.
What is a Journaling File System?
A journaling file system keeps a journal, or log, of the changes being made to the file system during disk writes, which can be used to rapidly reconstruct corruptions that may occur due to events such as a system crash or power outage. The level of journaling performed by the file system can be configured to provide a number of levels of logging, depending on your needs and performance requirements.
What are the Advantages of a Journaling File System?
There are a number of advantages to using a journaling file system:
Both the size and volume of data stored on disk drives have grown exponentially over the years. The problem with a non-journaled file system is that, following a crash, the fsck (file system consistency check) utility has to be run. fsck will scan the entire file system, validating all entries and making sure that blocks are allocated and referenced correctly. If it finds a corrupt entry, it will attempt to fix the problem. The issues here are two-fold. Firstly, the fsck utility will not always be able to repair the damage, and you will end up with data in the lost+found directory: data that was being used by an application, but which the system no longer knows where it was referenced from. The other problem is the issue of time: it can take a very long time to complete the fsck process on a large file system, leading to unacceptable downtime.
A journaled file system records information in a log area on a disk (the journal and the log do not need to be on the same device) during each write. This is essentially an "intent to commit" data to the file system. The amount of information logged is configurable and ranges from not logging anything, to logging what is known as the "metadata" (i.e., ownership, date-stamp information, etc.), to logging the metadata and the data blocks that are to be written to the file. Once the log is updated, the system then writes the actual data to the appropriate areas of the file system and marks an entry in the log to say the data is committed.
After a crash, the file system can very quickly be brought back online using the journal log, reducing what could take minutes using fsck to seconds, with the added advantage that there is considerably less chance of data loss or corruption.
What is a Journal Checkpoint?
When a file is accessed on the filesystem, the last snapshot of that file is read from the disk into memory. The journal log is then consulted to see if any uncommitted changes have been made to the file since the data was last written to the file (essentially looking for an "intention to commit" in the log entry, as described above). At particular points the filesystem will update file data on the disk from the uncommitted log entries and trim those entries from the log. Committing operations from the log and synchronizing the log with its associated filesystem is called a checkpoint.
What are the Disadvantages of a Journaled Filesystem?
Nothing in life is free, and ext3 and journaled filesystems are no exception to the rule. The biggest drawback of journaling is in the area of performance, simply because more disk writes are required to store information in the log. In practice, however, unless you are running a system where disk performance is absolutely critical, the performance difference will be negligible.
What Journaling Options are Available with the ext3 Filesystem?
The ext3 file system provides three options. These are as follows:
Journal (lowest risk)
Both metadata and file contents are written to the journal before being committed to the main file system. Because the journal is relatively continuous on disk, this can improve performance in some circumstances. In other cases, performance gets worse because the data must be written twice - once to the journal, and once to the main part of the file system.
Ordered (medium risk)
Only metadata is journaled; file contents are not, but it's guaranteed that file contents are written to disk before the associated metadata is marked as committed in the journal. This is the default on many Linux distributions.
If there is a power outage or kernel panic while a file is being written or appended to, the journal will indicate that the new file or appended data has not been "committed", so it will be purged by the cleanup process. (Thus appends and new files have the same level of integrity protection as the "journaled" level.) However, files being overwritten can be corrupted, because the original version of the file is not stored. Thus it's possible to end up with a file in an intermediate state between new and old, without enough information to restore either one or the other (the new data never made it to disk completely, and the old data is not stored anywhere). Even worse, the intermediate state might intersperse old and new data, because the order of the writes is left up to the disk's hardware. XFS uses this form of journaling.
Writeback (highest risk)
Only metadata is journaled; file contents are not. The contents might be written before or after the journal is updated. As a result, files modified right before a crash can become corrupted. For example, a file being appended to may be marked in the journal as being larger than it actually is, causing garbage at the end. Older versions of files could also appear unexpectedly after a journal recovery. The lack of synchronization between data and journal is faster in many cases. JFS uses this level of journaling, but ensures that any "garbage" due to unwritten data is zeroed out on reboot.
Does the Journal Log have to be on the Same Disk as the File System?
No, the ext3 journal log does not have to be on the same physical device as the file system it is logging. On Red Hat Linux, the journal device can be specified using the journal_device= option with the -journal-options command-line argument of the tune2fs utility.
Features of ext3
The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements provide the following advantages:
Availability
After an unexpected power failure or system crash (also called an unclean system shutdown), each mounted ext2 file system on the machine must be checked for consistency by the e2fsck program. This is a time-consuming process that can delay system boot time significantly, especially with large volumes containing a large number of files. During this time, any data on the volumes is unreachable.
The journaling provided by the ext3 file system means that this sort of file system check is no longer necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is in certain rare hardware-failure cases, such as hard drive failures. The time to recover an ext3 file system after an unclean system shutdown does not depend on the size of the file system or the number of files; rather, it depends on the size of the journal used to maintain consistency. The default journal size takes about a second to recover, depending on the speed of the hardware.
Data Integrity
The ext3 file system provides stronger data integrity in the event that an unclean system shutdown occurs. The ext3 file system allows you to choose the type and level of protection that your data receives. By default, Red Hat Linux 8.0 configures ext3 volumes to keep a high level of data consistency with regard to the state of the file system.
Speed
Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2, because ext3's journaling optimizes hard drive head motion.
You can choose from three journaling modes to optimize speed, but doing so means trade-offs with regard to data integrity.
Easy Transition
It is easy to change from ext2 to ext3 and gain the benefits of a robust journaling file system without reformatting.
Why ext3?
Ext3 is forward- and backward-compatible with ext2, allowing users to keep existing file systems while very simply adding journaling capability. Any user who wishes to un-journal a file system can do so easily (not that we expect many to do so...). Furthermore, an ext3 file system can be mounted as ext2 without even removing the journal, as long as a recent version of e2fsprogs (such as the one included in Red Hat Linux 7.2) is installed.
Ext3 benefits from the long history of fixes and enhancements to the ext2 file system, and will continue to do so. This means that ext3 shares ext2's well-known robustness, but also that as new features are added to ext2, they can be carried over to ext3 with little difficulty. When, for example, extended attributes or HTrees are added to ext2, it will be relatively easy to add them to ext3. (The extended-attributes feature will enable things like access control lists; HTrees make directory operations extremely fast and highly scalable to very large directories.)
Ext3, like ext2, has a multi-vendor team of developers who develop it and understand it well; its development does not depend on any one person or organization.
Ext3 provides and makes use of a generic journaling layer (jbd) which can be used in other contexts. Ext3 can journal not only within the file system, but also to other devices, so as NVRAM devices become available and supported under Linux, ext3 will be able to support them.
Ext3 has multiple journaling modes. It can journal all file data and metadata (data=journal), or it can journal metadata but not file data (data=ordered or data=writeback). When not journaling file data, you can choose to write file-system data before metadata (data=ordered, which causes all metadata to point to valid data), or not to handle file data specially at all (data=writeback, in which case the file system will be consistent, but old data may appear in files after an unclean system shutdown). This gives the administrator the power to make the trade-off between speed and file-data consistency, and to tune speed for specialized usage patterns.
Ext3 has broad cross-platform compatibility, working on 32- and 64-bit architectures, and on both little-endian and big-endian systems. Any system (currently including many Unix clones and variants, BeOS, and Windows) capable of accessing files on an ext2 file system will also be able to access files on an ext3 file system.
Ext3 does not require extensive core kernel changes and requires no new system calls, thus presenting Linus Torvalds no challenges that would effectively prevent him from integrating ext3 into his official Linux kernel releases. Ext3 is already integrated into Alan Cox's -ac kernels, slated for migration to Linus's official kernel soon.
The e2fsck file-system recovery program has a long and proven track record of successful data recovery when software or hardware faults corrupt a file system. Ext3 uses this same e2fsck code for salvaging the file system after such corruption, and therefore it has the same robustness against catastrophic data loss as ext2 in the presence of data-corruption faults.
Size Limits
Ext3 has a maximum size for both individual files and the entire filesystem. For the filesystem as a whole, that limit is 2^32 blocks.
Both limits are dependent on the block size of the filesystem; the following chart summarizes the limits:

Block size   Max file size   Max filesystem size
1 KB         16 GB           2 TB
2 KB         256 GB          8 TB
4 KB         2 TB            16 TB
8 KB         2 TB            32 TB

Disadvantages
Functionality
Since ext3 aims to be backwards-compatible with the earlier ext2, many of the on-disk structures are similar to those of ext2. Because of that, ext3 lacks a number of features of more recent designs, such as extents, dynamic allocation of inodes, and block sub-allocation. There is a limit of 31,998 sub-directories per directory, stemming from its limit of 32,000 links per inode.
Ext3, like most current Linux filesystems, cannot be fsck-ed while the filesystem is mounted for writing. Attempting to check a file system that is already mounted may detect bogus errors where changed data has not reached the disk yet, and may corrupt the file system in an attempt to "fix" these errors.
Defragmentation
There is no online ext3 defragmentation tool that works on the filesystem level. An offline ext2 defragmenter, e2defrag, exists, but it requires that the ext3 filesystem be converted back to ext2 first. And depending on the feature bits turned on in the filesystem, e2defrag may destroy data; it does not know how to treat many of the newer ext3 features.
There are userspace defragmentation tools, like Shake and defrag. Shake works by allocating space for the whole file as one operation, which will generally cause the allocator to find contiguous disk space. It also tries to write files used at the same time next to each other. Defrag works by copying each file over itself. However, they only work if the filesystem is reasonably empty. A true defragmentation tool does not exist for ext3.
That being said, as the Linux System Administrator Guide states, "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."
While ext3 is more resistant to file fragmentation than the FAT filesystem, ext3 filesystems can nonetheless become fragmented over time or with specific usage patterns, such as slowly writing large files. Consequently, the successor to the ext3 filesystem, ext4, includes a filesystem defragmentation utility and support for extents (contiguous file regions).
Recovery
There is no support for deleted-file recovery in the file-system design. The ext3 driver actively deletes files by wiping file inodes, for crash-safety reasons. That is why an accidental 'rm -rf ...' may cause permanent data loss.
There are still several techniques, and some commercial software such as UFS Explorer Standard Recovery version 4, for recovering deleted or lost files by means of file-system journal analysis; however, they do not guarantee any specific file recovery.
There is no chance of file recovery after a file-system format.
Compression
Support for transparent compression is available as an unofficial patch for ext3. This patch is a direct port of e2compr and still needs further development; it compiles and boots well with upstream kernels, but journaling is not implemented yet. The current patch is named e3compr.
No Checksumming in the Journal
Ext3 does not do checksumming when writing to the journal.
If barrier=1 is not enabled as a mount option (in /etc/fstab), and if the hardware is doing out-of-order write caching, one runs the risk of severe filesystem corruption during a crash.
Consider the following scenario: if hard disk writes are done out of order (due to modern hard disks caching writes in order to amortize write speeds), it is likely that one will write a commit block of a transaction before the other relevant blocks are written. If a power failure or unrecoverable crash occurs before the other blocks get written, the system will have to be rebooted. Upon reboot, the file system will replay the log as normal and replay the "winners" (transactions with a commit block, including the invalid transaction above, which happened to be tagged with a valid commit block). The unfinished disk write above will thus proceed, but using corrupt journal data. The file system will thus mistakenly overwrite normal data with corrupt data while replaying the journal. There is a test program available to trigger the problematic behavior. If checksums had been used, where the blocks of the "fake winner" transaction were tagged with a mutual checksum, the file system could have known better and not replayed the corrupt data onto the disk. Journal checksumming has been added to ext4.
Ext3 Distribution
The ext3 filesystem patch distributions and design papers are available from ftp://ftp.kernel.org/pub/linux/kernel/people/sct/ext3
Alternately, these materials are available from ftp://ftp.uk.linux.org/pub/linux/sct/fs/jfs/
The ext3 author and maintainer, Stephen Tweedie, may be reached at sct@redhat.com

Windows XP
Windows XP is an operating system produced by Microsoft for use on personal computers, including home and business desktops, laptops, and media centers. It was released in 2001. The name "XP" is short for "eXPerience".
Windows XP is the successor to both Windows 2000 Professional and Windows Me, and is the first consumer-oriented operating system produced by Microsoft to be built on the Windows NT kernel and architecture. Windows XP was first released on October 25, 2001, and over 400 million copies were in use in January 2006, according to an estimate that month by an IDC analyst. It was succeeded by Windows Vista, which was released to volume-license customers on November 8, 2006, and worldwide to the general public on January 30, 2007. Direct OEM and retail sales of Windows XP ceased on June 30, 2008. Microsoft continued to sell XP through its System Builders program (smaller OEMs who sell assembled computers) until January 31, 2009. XP may continue to be available as these sources run through their inventory, or by purchasing Windows Vista Ultimate or Business and then downgrading to Windows XP.
FEATURES
Windows XP introduced several new features to the Windows line, including:
Faster start-up and hibernation sequences
The ability to discard a newer device driver in favor of the previous one (known as driver rollback), should a driver upgrade not produce desirable results
A new, arguably more user-friendly interface, including the framework for developing themes for the desktop environment
Fast user switching, which allows a user to save the current state and open applications of their desktop and allow another user to log on without losing that information
The ClearType font-rendering mechanism, which is designed to improve text readability on Liquid Crystal Display (LCD) and similar monitors
Built on the new Windows engine
Enhanced device driver verifier
Dramatically reduced reboot scenarios
Improved code protection
Side-by-side DLL support
Windows File Protection
Windows Installer
Enhanced software restriction policies
Preemptive multitasking architecture
Scalable memory and processor support
Encrypting File System (EFS) with multi-user support
IP Security (IPSec)
Kerberos support
Smart card support
Internet Explorer Add-on Manager
Windows Firewall
Windows Security Center
Attachment Manager
Data Execution Prevention
Windows Firewall Exception List
Windows Firewall Application and Port Restrictions
Fresh visual design
UPDATED FEATURES
1. GDI+ powered graphics architecture
With the introduction of Windows XP, GDI was deprecated in favor of its successor, the C++ based GDI+ subsystem. GDI+ adds anti-aliased 2D graphics, textures, floating point coordinates, gradient shading, more complex path management, intrinsic support for modern graphics-file formats like JPEG and PNG, and support for composition of affine transformations in the 2D view pipeline. GDI+ uses ARGB values to represent color.
2. Start menu and Taskbar
With Windows XP, the taskbar and the Start button have been updated to support Fitts's law. To help the user access a wider range of common destinations more easily from a single location, the Start menu was expanded to two columns; the left column focuses on the user's installed applications, while the right column provides access to the user's documents, and system links which were previously located on the desktop. Links to the My Documents, My Pictures and other special folders are brought to the fore. The My Computer and My Network Places (Network Neighborhood in Windows 95 and 98) icons were also moved off the Desktop and into the Start menu, making it easier to access these icons while a number of applications are open.
3. Windows Explorer
There are significant changes made to Windows Explorer in Windows XP, both visually and functionally. Microsoft focused especially on making Windows Explorer more discoverable and task-based, as well as adding a number of features to reflect the growing use of a computer as a “digital hub”.
4. Task pane and navigation pane
The task pane is displayed on the left side of the window instead of the traditional folder tree view when the navigation pane is turned off. It presents the user with a list of common actions and destinations that are relevant to the current directory or file(s) selected. For instance, when in a directory containing mostly pictures, a set of “Picture tasks” is shown, offering the options to display these pictures as a slide show, to print them, or to go online to order prints.
5. SEARCH
Microsoft introduced animated "Search Companions" in an attempt to make searching more engaging and friendly; the default character is a puppy named Rover, with three other characters (Merlin the magician, Earl the surfer, and Courtney) also available. These search companions, powered by Microsoft Agent technology, bear a great deal of similarity to Microsoft Office's Office Assistants, even incorporating "tricks" and sound effects. However, the Search Companion can be turned off, and the user can revert to using classic search.

PATCHING
A patch is a piece of software designed to fix problems with, or update, a computer program or its supporting data. This includes fixing security vulnerabilities and other bugs, and improving usability or performance. Though meant to fix problems, poorly designed patches can sometimes introduce new problems, for example a software regression.
A software regression is a software bug which makes a feature stop functioning as intended after a certain event (for example, a system upgrade, system patching, or a change to daylight saving time). A software performance regression is a situation where the software still functions correctly, but performs slowly or uses more memory when compared to previous versions.
Patch management is the process of using a strategy and plan of what patches should be applied to which systems at a specified time.
Programmers publish and apply patches in various forms. Because proprietary-software authors withhold their source code, their patches are distributed as binary executables instead of source. This type of patch modifies the program executable - the program the user actually runs - either by modifying the binary file to include the fixes or by completely replacing it.
Patches can also circulate in the form of source-code modifications. In these cases, the patches consist of textual differences between two source-code files. These types of patches commonly come out of open-source projects. In these cases, developers expect users to compile the new or changed files themselves.


__________________________________________________________________________________
Operating System
What is an Operating System?
An operating system is a program that acts as an intermediary between the user of a computer and the computer hardware. It is an important part of almost every computer system. It is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. It also provides a basis for application programs.
The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner. An operating system is a program that manages the computer hardware. Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.
How Operating System Works?
For a computer to start running - for instance, when it is powered up or rebooted - it needs an initial program to run. This initial program, or bootstrap program, tends to be simple. Typically, it is stored in read-only memory (ROM), such as EEPROM, within the computer hardware.
Booting Process
Because operating systems take up so much memory, they must be stored on your hard drive until they can be loaded into random access memory (RAM). The bootstrap program must know how to load the operating system and start executing that system. When you turn on your computer, your PC's BIOS (Basic Input Output System) places a small amount of operating-system code into RAM. As a result, the remainder of the operating system is loaded into memory. The operating system then starts executing the first process, such as "init", and waits for some event to occur.
KERNEL
The kernel is the part of the operating system that deals with your hardware. As the user, you never work with the kernel itself; you must interact with it through a shell program.
SHELL
The shell program is the visual setting you see when you use your computer. It is also the part of the operating system where users can issue commands to the computer. Some operating systems use a command-line interface that allows you to type in specific commands. Others have a graphical user interface (GUI). GUIs use windows, menus, and icons to help you control your computer. Some operating systems have a variety of GUIs, allowing users to select the one they want.
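As a sketch of what a command-line shell does underneath, here is a bare-bones C loop for a Unix-like system. It handles only single-word commands, and the prompt string is hypothetical; a real shell also parses arguments, pipes, redirection, and job control.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* a bare-bones shell loop: read a command, fork a child,
       and let the child replace itself with the requested program */
    int main(void)
    {
        char line[256];

        for (;;) {
            printf("myshell> ");                 /* hypothetical prompt */
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                           /* end of input        */
            line[strcspn(line, "\n")] = '\0';    /* strip the newline   */
            if (strcmp(line, "exit") == 0)
                break;

            pid_t pid = fork();
            if (pid == 0) {
                execlp(line, line, (char *)NULL);
                perror("exec failed");           /* only reached on error */
                exit(1);
            } else if (pid > 0) {
                waitpid(pid, NULL, 0);           /* parent waits for the child */
            }
        }
        return 0;
    }

The fork/exec/wait cycle is the essential mechanism: the shell is just an ordinary user program that asks the kernel to create and run other processes.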
Examples of Operating System
1. Microsoft Windows

Microsoft Windows is a family of proprietary operating systems that originated as an add-on to the older MS-DOS operating system for the IBM PC. Modern versions are based on the newer Windows NT kernel that was originally intended for OS/2. Windows runs on x86, x86-64 and Itanium processors. Earlier versions also ran on the Alpha, MIPS, Fairchild (later Intergraph) Clipper and PowerPC architectures (some work was done to port it to the SPARC architecture).
If you got this one right then congratulations, you most likely have a pulse and a few brain cells. Any version of Windows counts, by the way, so if you said DOS (I know, it's not really Windows, but most people don't make the distinction), 3.1, 9x, Me (shudder), NT, 2000, XP or Vista you get a point. If you didn't guess Windows then you have probably never seen a computer, and thus must be quite mystified that you can read glowing letters on a magic box. Be careful, because as you are reading this last sentence I am stealing your soul and putting it in a little shiny mirror. Oh, too late, you finished reading and I now have your soul. That is what you get for looking at magic glowing boxes. Now be gone, before I try to convince you to buy some waterfront property from a Nigerian banker who made all of his money helping orphans while selling male growth hormones so he could get his college degree online with 0% financing.
2. Macintosh
Mac OS X is a line of computer operating systems developed, marketed, and sold by Apple Inc., and since 2002 has been included with all new Macintosh computer systems. It is the successor to Mac OS 9, the final release of the "classic" Mac OS, which had been Apple's primary operating system since 1984.
Well, you are reading this on a site called Apple Matters, so I would be kind of worried if you missed this one. So, give yourself another point if you guessed Mac OS Classic, System 7-9.2, OS X, or if you said Lisa. Actually, if you said Lisa, give yourself a pat on the back and congratulate yourself for knowing your Apple history. If you missed this one, chances are good that you are not too adept at using computers. You might also have missed the iPod craze.
3. Palm
I almost didn’t count this one but decided at the last minute to let it squeak by. If you have ever used a PDA before chances are good you have seen a Palm Pilot. You probably have even played with one in the store. And maybe you even still have one in a desk drawer somewhere. If so, go dig it out, dust it off and put it on eBay. And while you are at it give yourself a point.
4. Linux
If you said Linux or could name any of the distros then you are either a geek or related to one. Redhat, Ubuntu, Knoppix, Slackware, Debian all count and deserve one point. If you just said Linux then you still get the point on a technicality.
5. UNIX
Solaris counts. So does AIX, the BSDs or any other Unix variant you can name. So give yourself a point for remembering one of the oldest and most stable operating systems ever made.
6. BeOS/ZETA
If you know your Apple history then you will remember that an ex-Apple employee created the Be Operating System and later hoped to sell it back to Apple so it could serve as the core of their next-generation OS. However, BeOS would ultimately not be chosen and, after a few years of languishing, would eventually die. In recent times, though, it has been resurrected, born anew as ZETA. You are quite the technophile if you guessed either of these, so give yourself a point as you marvel at the depth of your tech knowledge.
7. NextStep
After leaving Apple Steve Jobs went on to found NEXT, a company that sold its own hardware (Black Box) and its own OS (NextStep). If you are up to date on your Apple history then you already know that when Jobs returned he brought NextStep with him and that it eventually morphed into OS X. Chalk up another point for remembering Steve’s other other other company.
8. OS/2 Warp/eComStation
IBM created OS/2 and hoped that it would compete effectively against Windows. It did not. Racked with many flaws, it has still managed to survive in some businesses, though it never made its way onto the consumer desktop. Like BeOS, it has been revitalized these last few years and given the new name of eComStation. If you remember the old name or the new, give yourself a point.
9. Sendla
Now we are getting to the obscure operating systems. If you have ever heard of Sendla then chances are you are in the top 1% of news-reading geeks and readily deserve your point. If you are one of the 9 people who actually use Sendla then you get two points, along with my condolences.
10. Amiga
If you are over 35 and have an attic, you might find an Amiga in there if you look closely. If you do find one then please recycle it and use the nickel you get back to buy a piece of gum. Other uses for these machines include door stops, boat anchors and shotgun targets.

11. Plan 9
Ken Thompson, Dennis Ritchie and Douglas McIlroy at Bell Labs designed and developed the C programming language to build the operating system Unix. Programmers at Bell Labs went on to develop Plan 9 and Inferno, which were engineered for modern distributed environments. Plan 9 was designed from the start to be a networked operating system, and had graphics built in, unlike Unix, which added these features to the design later. Plan 9 has yet to become as popular as Unix derivatives, but it has an expanding community of developers. It is currently released under the Lucent Public License. Inferno was sold to Vita Nuova Holdings and has been released under a GPL/MIT license.