Itanium

Posted by Harisinh | Posted in | Posted on 4:33 AM



Intel, with partner Hewlett-Packard, developed a next-generation 64-bit processor architecture called IA-64 (the 80x86 design was renamed IA-32); the first implementation was named Itanium. The Itanium core is not binary compatible with x86 processors; instead, a separate compatibility unit in hardware provides IA-32 compatibility.

Itanium allows memory operands only in load and store operations. As a 64-bit processor, Itanium can address far more memory than the 4 GB limit of 32-bit designs. The processor uses Explicitly Parallel Instruction Computing (EPIC), in which the compiler lines up instructions for parallel execution. Features were added to ensure compatibility with both Intel x86 and HP UNIX applications.

During development, it was widely expected that IA-64 would become the dominant processor architecture for servers, workstations, and perhaps even desktops, displacing the ubiquitous x86 architecture and providing an industry-standard architecture across an unprecedented range of computing platforms, but this never happened. The Itanium processor was specifically designed to provide a very high level of parallel processing, to enable high performance without requiring very high clock frequencies (which can lead to excessive power consumption and heat generation).


Key strengths of the Itanium architecture include:

- Up to 6 instructions per cycle: the Itanium processor can handle up to six simultaneous 64-bit instructions per clock cycle, and the dual-core version can support up to two software threads per core.

- Extensive execution resources per core: 256 application registers (128 general-purpose, 128 floating-point) and 64 predicate registers.

- Large cache: 24 MB in the dual-core version (12 MB per core), providing data to each core at up to 48 GB/s.

- Large address space: 50-bit physical / 64-bit virtual.

- Small, energy-efficient core: since Itanium relies on the compiler to schedule instructions for parallel throughput (other architectures rely on runtime optimization within the processor itself), it has fewer transistors in each core. This may be an advantage in current and future multi-core designs.


Itanium Microprocessor.

Pentium D



The Pentium D is a series of microprocessors that was introduced by Intel at the spring 2005 Intel Developer Forum. A 9xx-series Pentium D package contains two Pentium 4 dies, unlike other multi-core processors (including the Pentium D 8xx-series) that place both cores on a single die. The Pentium D was the first announced multi-core CPU (along with its more expensive twin, the Pentium Extreme Edition) from any manufacturer intended for desktop computers. Intel underscored the significance of this introduction by predicting that by the end of 2006 over 70% of its shipping desktop CPUs would be multi-core.

Historically, processor manufacturers have responded to the demand for more processing power primarily by delivering faster processor speeds. However, the challenge of managing the power and cooling requirements of today's powerful processors has prompted a reevaluation of this approach to processor design. Because heat rises faster than clock speed (the rate at which signals move through the processor), an increase in performance can create an even larger increase in heat. The answer is the multi-core microprocessor.

For example, by moving from a single high-speed core, with its corresponding increase in heat, to multiple slower cores, which produce a corresponding reduction in heat, enterprises can potentially improve application performance while reducing their thermal output. A multi-core microprocessor is one which combines two or more independent processors into a single package, often a single integrated circuit (IC); to be more specific, it has more than one execution unit within a single integrated circuit.

A dual-core device contains two independent microprocessor execution units, as shown in the figure below. In general, multi-core microprocessors allow a computing device to exhibit some form of thread-level parallelism (TLP) without including multiple microprocessors in separate physical packages. This form of TLP is often known as chip-level multiprocessing, or CMP.

The Pentium D 820 is a dual-core part running at 2.8 GHz. Its highlights: each core has a 16 KB L1 data cache and an Execution Trace Cache that stores up to 12 K decoded micro-ops in the order of program execution; Streaming SIMD Extensions 3 (SSE3) significantly accelerates the performance of digital media applications and includes additional integer and cacheability instructions that may improve other aspects of performance; the Execute Disable Bit feature, combined with a supported operating system, allows memory to be marked as executable or non-executable, and if code attempts to run in non-executable memory the processor raises an error to the operating system; internal performance counters support performance monitoring and event counting; and a thermal monitor feature allows motherboards to be more cost-effective.

Analysts have speculated that the clock rate race between Intel and AMD is largely over, with no more exponential gains in clock rate likely. Instead, as long as Moore's Law holds true, it is expected that the increasing number of transistors chipmakers can incorporate into their CPUs will be used to increase CPU throughput through other methods, such as adding cores.
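The chip-level multiprocessing described above appears to software as multiple logical CPUs, and a program exploits it by running independent work on each core. A minimal sketch in Python (the workload and the chunking scheme are illustrative, not anything specific to the Pentium D):

```python
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    """Independent unit of work; one chunk runs on each core."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    cores = cpu_count()                      # e.g. 2 on a dual-core Pentium D
    step = n // cores
    chunks = [(i * step, (i + 1) * step) for i in range(cores)]
    chunks[-1] = (chunks[-1][0], n)          # cover any remainder
    with Pool(cores) as pool:                # one worker process per core
        total = sum(pool.map(partial_sum, chunks))
    assert total == n * (n - 1) // 2         # same answer as the serial sum
    print(cores, total)
```

The speed-up comes from the chunks executing simultaneously on separate cores; on a single-core machine the same code still runs, just without the parallelism.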


Pentium D Microprocessor

Itanium 2



The Itanium 2 is an IA-64 microprocessor developed jointly by Hewlett-Packard (HP) and Intel, and introduced on July 8, 2002. The first Itanium 2 processor (code-named McKinley) was substantially more powerful than the original Itanium processor, roughly doubling performance, and providing competitive performance across a range of data- and compute-intensive workloads. Several generations of Itanium 2 processors have followed. The Itanium 2 processor architecture is dubbed Explicitly Parallel Instruction Computing (EPIC).

It is theoretically capable of performing roughly 8 times more work per clock cycle than other CISC and RISC architectures due to its Parallel Computing Micro-architecture. However, performance is heavily dependent on software compilers and their ability to generate code which efficiently uses the available execution units of the processor.

All Itanium 2 processors to date share a common cache hierarchy. They have 16 KB of Level 1 instruction cache and 16 KB of Level 1 data cache. The L2 cache is unified (both instruction and data) and is 256 KB. The Level 3 cache is also unified and varies in size from 1.5 MB to 24 MB. In an interesting design choice, the L2 cache contains sufficient logic to handle semaphore operations without disturbing the main ALU. The latest Itanium processor, however, features a split L2 cache, adding a dedicated 1 MB L2 cache for instructions and thereby effectively growing the original 256 KB L2 cache, which becomes a dedicated data cache. Most systems sold by enterprise server vendors that contain 4 or more processor sockets use proprietary Non-Uniform Memory Access (NUMA) architectures that supersede the more limited front side bus of 1 and 2 CPU socket servers.

The Itanium 2 bus is occasionally referred to as the Scalability Port, but much more frequently as the McKinley bus. It is a 200 MHz, 128-bit wide, double-pumped bus capable of 6.4 GB/s, more than three times the bandwidth of the original Itanium bus, known as the Merced bus. In 2004, Intel released processors with a 266 MHz bus, increasing bandwidth to 8.5 GB/s. In early 2005, processors with a 333 MHz, 10.6 GB/s bus were released.
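The bandwidth figures above follow directly from the clock rate, the bus width, and the pumping factor. A quick check of the arithmetic (numbers taken from the paragraph above):

```python
def bus_bandwidth_gb_s(clock_mhz, width_bits, pumps_per_clock):
    """Bytes/s = clock rate x transfers per clock x bus width in bytes."""
    return clock_mhz * 1e6 * pumps_per_clock * (width_bits // 8) / 1e9

print(bus_bandwidth_gb_s(200, 128, 2))   # 6.4   (original McKinley bus)
print(bus_bandwidth_gb_s(266, 128, 2))   # ~8.5  (2004 parts)
print(bus_bandwidth_gb_s(333, 128, 2))   # ~10.6 (early 2005 parts)
```

Double pumping transfers data on both edges of the clock, which is why each clock of the 128-bit (16-byte) bus moves 32 bytes.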


Itanium 2 Microprocessor.

Microprocessor - 80286


The 80286 is called the second generation of microprocessor; it is more advanced than the 80186. This is the first Intel microprocessor offering multitasking and virtual memory. It is a 16-bit processor capable of addressing up to 16 MB of RAM (it had an address bus of 24 bits) and could also work with 1 GB of virtual memory. It had a prefetch queue of 6 instructions.

The 286 is the first "real" processor: it introduced the concepts of protected mode and real mode. To ensure proper operation, the operating system, all other programs, and the shared resources must be protected. The approach taken by many operating systems is hardware support that differentiates among various modes of operation. A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode. With the mode bit, we are able to distinguish between a task executed on behalf of the operating system and one executed on behalf of the user. This dual mode of operation protects the operating system from errant users: machine instructions that may cause harm are designated "privileged instructions" and can be executed only in monitor mode.

The 286 had an extra register called the Machine Status Word (MSW) whose lower nibble (containing D3 D2 D1 D0) defined the mode of operation, and it used four-level memory protection, an extension of the user/supervisor (protected/real) mode concept. It also had an on-chip Memory Management Unit (MMU). This is also the first Intel processor that could run all of the software written for its predecessor. It has 134,000 transistors and could run at 6 to 12.5 MHz.

Pentium PRO



Towards the end of 1995 the Pentium Pro was announced. This Pentium introduced a new socket (Socket 8), utilizing 387 pins. The Pro series included the ability to run multiple instructions in one cycle, out-of-order execution, dynamic branch prediction, and speculative execution. Also included was an impressive cache arrangement. For programmers, the Pro looks like a classic CISC CPU, while internally the CPU is very RISC-oriented in design.

This 3.3 Volt CPU (3.1V at 150 MHz) was designed with a 32-bit operating system (OS) such as Windows NT in mind. While the Pro had Level 1 cache in the CPU, its real forte was the integrated Level 2 cache which allowed upwards of 1MB of cache to reside inside the CPU packaging to run at processor speed. This really improved performance in SMP based system boards. The Pro chip was the first chip to be offered in the AT or the ATX format.

The ATX format was preferred, as the Pro consumed more than 25 W of power, which generated a fair amount of heat. There were several major improvements of the Pentium Pro over the Pentium: for example, it had a superscalar architecture (a microprocessor architecture containing more than one execution unit), a 12-stage superpipeline, internal micro-ops similar to RISC-like instructions, and internal thermal protection. This microprocessor could be clocked to 200 MHz and consisted of 5.5 million transistors.


Pentium PRO Microprocessor

Microprocessor - 80386



The 80386 is the first popular 32-bit microprocessor. IA-32 first appeared with the 80386 processor, but the architecture was by no means completely new. IA-32’s 8-bit predecessor first appeared in the Datapoint 2200 programmable terminal, released in 1971.

It wasn’t just an evolutionary product in Intel’s growing family of microprocessors; it was revolutionary. It is a 32-bit chip that contained 275,000 transistors, could process five million instructions per second, and could run all popular operating systems, including Windows. It is also “multitasking,” meaning it could run multiple programs at the same time.

It has a pre-fetch queue length of 16 bytes. It has extensive memory management capabilities. It incorporates a sophisticated technique known as paging, in addition to the segmentation technique, for achieving virtual memory. The 80386 provided a new mode, virtual 8086 mode, in which real-mode programs could run while the processor was in protected mode. To support the concept of virtual memory to a greater extent, it also has an on-chip address translation unit. This, combined with a more flexible segmentation scheme and a larger addressable memory space (32 bits rather than 24, bringing the total addressable memory to 4 GB from 16 MB, with a virtual address space of 64 TB), made 80386 protected mode the mode of choice for all modern operating systems.
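The address-space figures quoted above are simple powers of two, easy to verify:

```python
MB, GB, TB = 2**20, 2**30, 2**40

assert 2**24 == 16 * MB    # 80286: 24 address lines -> 16 MB physical
assert 2**32 == 4 * GB     # 80386: 32 address lines -> 4 GB physical
assert 2**46 == 64 * TB    # 80386 virtual address space: 64 TB

print(2**32 // 2**24)      # 256: the 386 addresses 256x more physical memory
```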

Later IA-32 implementations have not made significant changes or enhancements to protected mode. IA-32 adds the extended registers EAX, ECX, EDX, EBX, EBP, ESP, ESI, EDI, EIP, and EFLAGS, as well as two additional segment registers, FS and GS. Originally all registers were special-purpose. For example, AX was originally an accumulator and could only be used as such. IA-32 lifted many of the restrictions on register usage, but some remain. For example, some instructions assume that a pointer in the EBX register is relative to the segment indexed by DS. In practice, 6 registers are available for general-purpose use, far fewer than the number available in the ARM or IA-64 architectures.

The practical result of this register pressure is that IA-32 programs tend to make more frequent use of the stack for temporary storage. The 80386 has an automatic self-test feature known as Built-In Self-Test (BIST). The BIST tests approximately one-half of the 80386, including the internal control ROM. After successful completion of BIST, the 80386 performs a reset sequence, after which it starts from the reset vector.



Microprocessor - 80386

Pentium 2




This CPU had remarkable performance. The challenge Intel faced was the production cost of the Pro chip: the built-in L2 cache had a high failure rate at Intel fabrication plants. Their answer was the Pentium II (P2). Intel began by separating the processor and cache of the Pentium Pro, mounting them together on a circuit board with a big heat sink.

Then, by dropping the whole assembly onto the system board using a Single Edge Contact (SEC) cartridge with 242 pins in the slot, and adding the 57 MMX (Multimedia Extension) micro-code instructions, Intel had the Pentium II. This way, a defective cache module does not force throwing out a perfectly good CPU. To further improve cache yields, the Pentium II ran its cache at half the speed of the CPU.


The Pentium II uses Dynamic Execution Technology, which consists of three facilities: multiple branch prediction, which predicts program execution through several branches, accelerating the flow of work to the processor; dataflow analysis, which creates an optimized, reordered schedule of instructions by analyzing data dependencies between instructions; and speculative execution, which carries out instructions speculatively, ensuring that the multiple execution units remain busy and boosting overall performance.

The Pentium II includes data integrity and reliability features such as Error Correction Code (ECC), fault analysis, recovery, and Functional Redundancy Checking for both the system and L2 cache buses. The pipelined Floating-Point Unit (FPU) supports the 32-bit and 64-bit formats specified in IEEE standard 754, as well as an 80-bit format. Parity-protected address/request and response system bus signals, with a retry mechanism, provide high data integrity and reliability. An on-die diode monitors the die temperature.

A thermal sensor located on the motherboard can monitor the die temperature of the Pentium II processor for thermal management purposes. This microprocessor could work at clock rates of 300 MHz and is made up of 7.5 million transistors.


Pentium 2 Microprocessor.

Pentium 3



Similar to the Pentium II, the Pentium III processor also uses a Dynamic Execution micro-architecture: a unique combination of multiple branch prediction, data flow analysis, and speculative execution. The Pentium III has two major differences from the Pentium II: improved MMX and the processor serial number feature.

The improved MMX has a total of 70 instructions enabling advanced imaging, 3D, streaming audio and video, and speech recognition for an enhanced Internet experience, with technology instructions for enhanced media and communication performance. Additionally, Streaming SIMD (single-instruction, multiple-data) Extensions enhance floating-point and 3D application performance.

It also includes Internet Streaming SIMD Extensions, which consist of 70 instructions covering single-instruction, multiple-data floating-point operations, additional SIMD integer instructions, and cacheability control instructions. Data pre-fetch logic anticipates the data needed by application programs and pre-loads it into the Advanced Transfer Cache, increasing performance.


The processor has multiple low-power states, such as Sleep and Deep Sleep, to conserve power during idle times. The system bus runs at 100 MHz or 133 MHz, allowing for a 33% increase in available bandwidth to the processor. The Processor Serial Number extends the concept of processor identification by providing a 96-bit software-accessible processor number that may be used by applications to identify a system. Applications include membership authentication, data backup/restore protection, removable storage data protection, and managed access to files.

Pentium 3 Microprocessor.

Pentium 4



The Pentium 4 processor is Intel's microprocessor that was introduced at 1.5 GHz in November of 2000. It implements the new Intel NetBurst micro-architecture that features significantly higher clock rates and world-class performance. It includes several important new features and innovations that will allow the Intel Pentium 4 processor to deliver industry-leading performance for the next several years. The Pentium 4 processor is designed to deliver performance across applications where end users can truly appreciate and experience its performance.

For example, it allows a much better user experience in areas such as Internet audio and streaming video, image processing, video content creation, speech recognition, 3D applications and games, multimedia, and multi-tasking user environments. The Pentium 4 processor enables real-time MPEG2 video encoding and near real-time MPEG4 encoding, allowing efficient video editing and video conferencing. It delivers world-class performance on 3D applications and games.

It adds 144 new 128-bit Single Instruction Multiple Data (SIMD) instructions called SSE2 (Streaming SIMD Extensions 2) that improve performance for multimedia, content creation, scientific, and engineering applications. The Intel NetBurst micro-architecture of the Pentium 4 processor has four main sections: the in-order front end, the out-of-order execution engine, the integer and floating-point execution units, and the memory subsystem.

The Pentium 4 processor has a 20-stage misprediction pipeline, while the P6 micro-architecture has a 10-stage misprediction pipeline. (This pipeline covers the cycles it takes a processor to recover from a branch that went a different direction than the early fetch hardware predicted at the beginning of the machine pipeline.) By dividing the pipeline into smaller pieces, and doing less work during each pipeline stage (fewer gates of logic), the clock rate can be much higher. The Pentium 4 processor has a system bus with 3.2 GB per second of bandwidth. This high bandwidth is a key enabler for applications that stream data from memory.

This bandwidth is achieved with a 64-bit wide bus capable of 400 million data transfers per second. It uses a source-synchronous protocol that quad-pumps the 100 MHz bus to reach that transfer rate. It has a split-transaction, deeply pipelined protocol that allows the memory subsystem to overlap many simultaneous requests and actually deliver high memory bandwidth in a real system. The bus protocol has a 64-byte access length. The Pentium 4 processor has 42 million transistors implemented on Intel's 0.18 µm CMOS process, with six levels of aluminum interconnect.
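The 3.2 GB/s figure follows directly from the quad-pumped clocking described above:

```python
base_clock_hz = 100e6        # 100 MHz bus clock
transfers_per_clock = 4      # source-synchronous quad pumping
bus_width_bytes = 8          # 64-bit wide data bus

transfer_rate = base_clock_hz * transfers_per_clock   # 400 million transfers/s
bandwidth = transfer_rate * bus_width_bytes           # bytes per second

assert transfer_rate == 400e6
assert bandwidth == 3.2e9                             # 3.2 GB/s, as quoted
print(bandwidth)
```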


Pentium 4 Microprocessor.

CIDR - Classless Interdomain Routing


CIDR is the standard that specifies the details of both classless addressing and an associated routing scheme. Accordingly, the name is a slightly inaccurate designation because CIDR specifies addressing as well as routing. The original IPv4 model built on network classes was a useful mechanism for allocating identifiers (netid and hostid) when the primary users of the Internet were academic and research organisations. But this model proved insufficiently flexible and inefficient as the Internet grew rapidly to include gateways into corporate enterprises with complex networks. By September 1993, it was clear that the growth in Internet users would require an interim solution while the details of IPv6 were being finalised. The resulting proposal was submitted as RFC 1519, titled 'Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy.' CIDR is classless, representing a move away from the original IPv4 network class model. CIDR is concerned with interdomain routing rather than host identification, and it is a strategy for the allocation and use of IPv4 addresses rather than a new proposal.
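Python's standard ipaddress module implements CIDR notation directly; a small sketch (the prefixes below are illustrative examples, not from the text):

```python
import ipaddress

# A /22 block: 22 prefix bits leave 10 host bits -> 1024 addresses.
net = ipaddress.ip_network("192.168.16.0/22")
print(net.num_addresses)                              # 1024
print(net.netmask)                                    # 255.255.252.0
print(ipaddress.ip_address("192.168.19.200") in net)  # True

# Route aggregation: four contiguous class-C-sized /24s collapse
# into a single /22 routing-table entry.
blocks = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(16, 20)]
print(list(ipaddress.collapse_addresses(blocks)))
# [IPv4Network('192.168.16.0/22')]
```

This aggregation step is exactly what lets CIDR shrink backbone routing tables compared with class-based allocation.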

CGI - Common Gateway Interface



A dynamic document is created by a Web server whenever a browser requests the document. When a request arrives, the Web server runs an application program that creates the dynamic document.

Common Gateway Interface (CGI) is a technology that creates and handles dynamic documents.

CGI is a set of standards that defines how a dynamic document should be written, how the input data should be supplied to the program and how the output result should be used.

CGI is not a new language; rather, it allows programmers to use any of several languages such as C, C++, Bourne shell, Korn shell or Perl. A CGI program in its simplest form is code written in one of the languages supporting CGI.
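As a sketch, here is a complete CGI program in Python (one of the many languages CGI permits; the `name` parameter is a made-up example). The server passes input through environment variables such as QUERY_STRING, and everything the program writes to standard output, headers first, becomes the dynamic document:

```python
#!/usr/bin/env python3
import os
from urllib.parse import parse_qs

# Input: the Web server sets QUERY_STRING from the URL's ?key=value part.
query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["world"])[0]

# Output: headers, a blank line, then the document body.
print("Content-Type: text/html")
print()
print(f"<html><body><h1>Hello, {name}!</h1></body></html>")
```

Requesting this script with `?name=Ada` appended to its URL would make the server run it and return the generated HTML to the browser.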


That is what CGI is all about for the Internet.
Enjoy.....

FTP - File Transfer Protocol



File Transfer Protocol (FTP) is the standard mechanism provided by TCP/IP for copying a file from one host to another. The FTP protocol is defined in RFC 959 and updated by RFC 2228, 2640 and 2773.

In transferring files from one system to another, two systems may have different ways to represent text and data. Two systems may have different directory structures. All of these problems have been solved by FTP in a very simple and elegant way.

FTP differs from other client-server applications in that it establishes two connections between the hosts. One connection is used for data transfer (port 20), the other for control information (port 21). The control connection port remains open during the entire FTP session and is used to send control messages and client commands between the client and server. A data connection is established using an ephemeral port.

The data connection is created each time a file is transferred between the client and server. Separation of commands and data transfer makes FTP more efficient. FTP allows the client to specify whether a file contains text (ASCII or EBCDIC character sets) or binary integers. FTP requires clients to authorise themselves by sending a login name and password to the server before requesting file transfers.
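With Python's standard ftplib, a session following the steps above looks like this (the host name, credentials, and file name are placeholders, not a real server):

```python
from ftplib import FTP

def fetch_file(host: str, filename: str) -> list:
    """Log in, list the directory, and retrieve one file in binary mode."""
    with FTP(host) as ftp:                   # control connection, port 21
        ftp.login(user="anonymous", passwd="guest@example.com")
        ftp.set_pasv(True)                   # data connections use ephemeral ports
        names = ftp.nlst()                   # listing travels over a data connection
        with open(filename, "wb") as fh:     # each transfer opens a new data connection
            ftp.retrbinary("RETR " + filename, fh.write)
        return names
```

A call such as `fetch_file("ftp.example.com", "readme.txt")` would perform the whole exchange: login over the control connection, then listing and retrieval over separate data connections.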

Since FTP is used only to send and receive files, its attack surface is relatively narrow; note, however, that it transmits the login name and password in clear text, so it is not immune to eavesdropping.

IGMP - Internet Group Management Protocol


The Internet Group Management Protocol (IGMP) is used to facilitate the simultaneous transmission of a message to a group of recipients. IGMP helps multicast routers maintain a list of multicast addresses of groups. 'Multicasting' means sending the same message to more than one receiver simultaneously. When the router receives a message with a destination address that matches one on the list, it forwards the message, converting the IP multicast address to a physical multicast address. To participate in IP multicasting on a local network, a host must inform the local multicast routers. The local routers contact other multicast routers, passing on the membership information and establishing routes. IGMP has only two types of messages: report and query. The report message is sent from the host to the router. The query message is sent from the router to the host.

A router sends an IGMP query to determine whether a host wishes to continue membership in a group. The query message is multicast using the multicast address 224.0.0.1. The report message is multicast using a destination address equal to the multicast address being reported. IP addresses that start with the binary prefix 1110 are multicast addresses; multicast addresses are class D addresses. The IGMP message is encapsulated in an IP datagram with a protocol value of 2. When the message is encapsulated in the IP datagram, the value of TTL must be 1. This is required because the domain of IGMP is the LAN. The multicast backbone (MBONE) is a set of routers on the Internet that supports multicasting. MBONE is based on the multicasting capability of IP. Today MBONE uses the services of UDP at the transport layer.
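The 1110 prefix and the all-systems query address can be checked directly with Python's standard ipaddress module:

```python
import ipaddress

all_systems = ipaddress.ip_address("224.0.0.1")   # target of general queries
class_d = ipaddress.ip_network("224.0.0.0/4")     # the whole class D range

assert all_systems in class_d
assert all_systems.is_multicast
assert not ipaddress.ip_address("192.0.2.1").is_multicast  # ordinary unicast

# The defining property of a class D address: its first four bits are 1110.
assert int(all_systems) >> 28 == 0b1110
print("224.0.0.1 is a class D multicast address")
```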

That is what IGMP is all about; the protocol is the set of rules that makes it work. Enjoy.....

HTTP - Hypertext Transfer Protocol



The protocol used to transfer a Web page between a browser and a Web server is known as Hypertext Transfer Protocol (HTTP). HTTP operates at the application level. HTTP is a protocol used mainly to access data on the World Wide Web. HTTP functions like a combination of FTP and SMTP: it is similar to FTP because it transfers files, and it is like SMTP because the data transferred between the client and the server looks like SMTP messages. However, HTTP differs from SMTP in that SMTP messages are stored and forwarded, whereas HTTP messages are delivered immediately. As a simple example, a browser sends an HTTP GET command to request a Web page from a server. A browser contacts a Web server directly to obtain a page. The browser begins with a URL, extracts the hostname section, uses DNS to map the name into an equivalent IP address, and uses the IP address to form a TCP connection to the server. Once the TCP connection is in place, the browser and Web server use HTTP to communicate. Thus, if the browser sends a request to retrieve a specific page, the server responds by sending a copy of the page.

A browser requests a Web page, and the server transfers a copy to the browser. HTTP also allows transfer from a browser to a server. HTTP allows browsers and servers to negotiate details such as the character set to be used during transfers. To improve response time, a browser caches a copy of each Web page it retrieves. HTTP allows a machine along the path between a browser and a server to act as a proxy server that caches Web pages and answers a browser's request from its cache. Proxy servers are an important part of the Web architecture because they reduce the load on servers. In summary, a browser and server use HTTP to communicate. HTTP is an application-level protocol with explicit support for negotiation, proxy servers, caching and persistent connections.

That is what HTTP is all about; the protocol is the set of rules that makes it work. Enjoy.....

HTML - Hypertext Markup Language


The browser architecture is composed of the controller and the interpreters to display a Web document on the screen. The controller can be one of the protocols such as HTTP, FTP, Gopher or TELNET. The interpreter can be HTML or Java, depending on the type of document.
The Hypertext Markup Language (HTML) is a language used to create Web pages. A markup language such as HTML is embedded in the file itself, and formatting instructions are stored with the text. Thus, any browser can read the instructions and format the text according to the workstation being used. Suppose a user creates formatted text on a Macintosh computer and stores it in a Web page, so another user who is on an IBM computer is not able to receive the Web page because the two computers are using different formatting procedures. Consider a case where different word processors use different techniques or procedures to format text. To overcome these difficulties, uses only ASCII characters for both main text and formatting instructions. Therefore, every computer can receive the whole document as an ASCII document. Web page.

A Web page consists of two parts: the head and the body. The head is the first part of a Web page; it contains the title of the page and other parameters that the browser will use. The body contains the actual content of the page, made up of the text and the tags (marks). The text is the information contained in a page, whereas the tags define the appearance of the document.

Tags :
====

Tags are marks that are embedded into the text. Every HTML tag is a name followed by an optional list of attributes. An attribute is followed by an equals sign (=) and the value of the attribute. Some tags are used alone; some are used in pairs. The tags used in pairs are called starting and ending tags. The starting tag can have attributes and values. The ending tag cannot have attributes or values, but must have a slash before the name. An example of starting and ending tags is shown below:

<TagName attribute="value" attribute="value">  (starting tag)
</TagName>  (ending tag)

A tag is enclosed in two angled brackets, like <B>, and usually comes in pairs, as <B> and </B>. The starting tag starts with the name of the tag, and the ending tag starts with a slash followed by the name of the tag. A tag can have a list of attributes, each of which can be followed by an equals sign and a value associated with the attribute.
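Python's standard html.parser module walks exactly this tag-and-attribute structure. A small sketch that lists each starting tag together with its attributes (the sample page is made up):

```python
from html.parser import HTMLParser

class TagLister(HTMLParser):
    """Record every starting tag together with its attribute/value pairs."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append((tag, dict(attrs)))

page = ('<html><head><title>Demo</title></head>'
        '<body><h1 align="center">Hello</h1></body></html>')
parser = TagLister()
parser.feed(page)
print(parser.tags)
# [('html', {}), ('head', {}), ('title', {}), ('body', {}), ('h1', {'align': 'center'})]
```

Note how the parser sees only the starting tags and their attributes; the ending tags carry no attributes, just the slash and the name, as described above.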

That is what HTML is all about; the markup is the set of rules that makes it work.

ICMP - Internet Control Message Protocol


The Internet Control Message Protocol (ICMP) is an extension to the Internet Protocol which is used to communicate between a gateway and a source host, to manage errors and generate control messages.

The Internet Protocol (IP) is not designed to be absolutely reliable. The purpose of ICMP control messages is to provide feedback about problems in the communication environment, not to make IP reliable. There are still no guarantees that a datagram will be delivered or that a control message will be returned. Some datagrams may still go undelivered without any report of their loss. The higher-level protocols that use TCP/IP must implement their own reliability procedures if reliable communication is required.

IP is an unreliable protocol that has no mechanisms for error checking or error control. ICMP was designed to compensate for this IP deficiency. However, ICMP does not correct errors; it simply reports them. ICMP uses the source IP address to send the error message to the source of the datagram.

ICMP messages consist of error-reporting messages and query messages. The error-reporting messages report problems that a router or a destination host may encounter when it processes an IP packet. In addition to error reporting, ICMP can diagnose some network problems through the query messages. The query messages (used in pairs) give a host or a network manager specific information from a router or another host.
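Every ICMP message carries the same one's-complement checksum that IP uses. A sketch in Python that builds an ICMP Echo Request (the query message that ping uses; the identifier and payload are made-up values) and verifies its checksum:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# ICMP Echo Request header: type 8, code 0, checksum, identifier, sequence.
ident, seq, payload = 0x1234, 1, b"hello"
header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum zeroed first
csum = inet_checksum(header + payload)
packet = struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

# A receiver validates by checksumming the whole message: the result is 0.
assert inet_checksum(packet) == 0
print("checksum = 0x%04x" % csum)
```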


This is all about ICMP, its rules and the way it works. Enjoy.....

IP Versions

Posted by Harisinh | Posted in | Posted on 12:57 PM

0

---------------------


The evolution of TCP/IP technology has led to attempts to solve problems that improve service and extend functionality. Most researchers seek new ways to develop and extend the improved technology, and millions of users want to solve new networking problems and improve the underlying mechanisms. The motivation for revising the protocols arises from changes in the underlying technology: first, computer and network hardware continues to evolve; second, as programmers invent new ways to use TCP/IP, additional protocol support is needed; third, the global Internet has experienced huge growth in size and use.

This section examines a proposed revision of the Internet Protocol, which is one of the most significant engineering efforts so far. The network layer protocol is currently IPv4, which provides the basic communication mechanism of the TCP/IP suite. Although IPv4 is well designed, data communication has evolved since its inception in the 1970s, and it has some deficiencies that make it unsuitable for the fast-growing Internet. The IETF decided to develop a new version of IP and to name it IPv6 to distinguish it from the current IPv4.

The proposed IPv6 protocol retains many of the features that contributed to the success of IPv4. In fact, the designers have characterised IPv6 as being basically the same as IPv4 with a few modifications: IPv6 still supports connectionless delivery, allows the sender to choose the size of a datagram, and requires the sender to specify the maximum number of hops a datagram can make before being terminated. In addition, IPv6 retains most of IPv4's options, including facilities for fragmentation and source routing. IP version 6 (IPv6), also known as the Internet Protocol next generation (IPng), is designed to be a full replacement for IPv4.
IPv6 has a 128-bit address space, a revised header format, new options, an allowance for extension, support for resource allocation and increased security measures. However, due to the huge number of systems on the Internet, the transition from IPv4 to IPv6 cannot occur all at once; it will take a considerable amount of time before every system on the Internet can move from IPv4 to IPv6. RFC 2460 defines the new IPv6 protocol.

IPv6 differs from IPv4 in a number of significant ways :
--------------------------------------------------------
• The IP address length in IPv6 is increased from 32 to 128 bits.
• IPv6 can automatically configure local addresses and locate IP routers to reduce configuration and setup problems.
• The IPv6 header format is simplified and some header fields have been dropped. The new header format improves router performance and makes it easier to add new header types.
• Support for authentication, data integrity and data confidentiality are part of the IPv6 architecture.
• A new concept of flows has been added to IPv6 to enable the sender to request special handling of datagrams.


This is all about IP versions and how they work. Enjoy......

IPv6 Addressing

Posted by Harisinh | Posted in | Posted on 12:57 PM

0

-----------------


In December 1995, the network working group of the IETF proposed a longer-term solution for specifying and allocating IP addresses. RFC 2373 describes the address space associated with IPv6. The biggest concern for Internet developers is the migration process from IPv4 to IPv6.

IPv4 addressing has the following shortcomings: IPv4 was defined when the Internet was small and consisted of networks of limited size and complexity. It offered two layers of address hierarchy (netid and hostid) with three address formats (class A, B and C) to accommodate varying network sizes. Both the limited address space and the 32-bit address size in IPv4 proved to be inadequate for handling the increase in the size of the routing table caused by the immense numbers of active hosts and servers. IPv6 is designed to improve upon IPv4 in each of these areas.

IPv6 allocates 128 bits for addresses. Analysis shows that this address space will suffice to incorporate flexible hierarchies and to distribute the responsibility for allocation and management of the IP address space. Like IPv4 addresses, IPv6 addresses are represented as a string of digits (128 bits, or 32 hex digits), broken down into eight 16-bit integers separated by colons (:). The basic representation takes the form of eight sections, each two bytes in length:

xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx

where each xxxx represents the hexadecimal form of 16 bits of address. IPv6 uses this hexadecimal colon notation together with abbreviation methods.
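Python's standard ipaddress module illustrates the notation and its abbreviation method (the address below is from the 2001:db8::/32 documentation range):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:ff00:0042:8329")
print(addr.exploded)    # full form: eight 16-bit groups of four hex digits
print(addr.compressed)  # leading zeros dropped, longest zero run becomes '::'
# 2001:db8::ff00:42:8329
```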


This is all about IPv6 addressing. Enjoy......

CIDR - Classless Interdomain Routing

Posted by Harisinh | Posted in | Posted on 12:52 PM

0

---------------------


CIDR is the standard that specifies the details of both classless addressing and an associated routing scheme. Accordingly, the name is a slightly inaccurate designation, because CIDR specifies addressing as well as routing. The original IPv4 model built on network classes was a useful mechanism for allocating identifiers (netid and hostid) when the primary users of the Internet were academic and research organisations. However, this model proved insufficiently flexible and inefficient as the Internet grew rapidly to include gateways into corporate enterprises with complex networks. By September 1993, it was clear that the growth in Internet users would require an interim solution while the details of IPv6 were being finalised. The resulting proposal was submitted as RFC 1519, titled 'Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy'. CIDR is classless, representing a move away from the original IPv4 network class model. CIDR is concerned with interdomain routing rather than host identification, and it is a strategy for the allocation and use of IPv4 addresses rather than a new protocol.
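A small Python sketch using the standard ipaddress module shows the aggregation idea at the heart of CIDR: contiguous prefixes collapse into a single, shorter prefix that routers can advertise as one route (the 192.168.x.x blocks are private-range examples):

```python
import ipaddress

# Four contiguous former "class C" blocks...
blocks = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]

# ...aggregate into a single /22 supernet, shrinking the routing table.
aggregated = list(ipaddress.collapse_addresses(blocks))
print(aggregated)  # [IPv4Network('192.168.0.0/22')]
```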


This is all about CIDR - Classless Interdomain Routing.

SNMP - Simple Network Management Protocol

Posted by Harisinh | Posted in | Posted on 11:04 PM

0

The Simple Network Management Protocol (SNMP) is an application-layer protocol that facilitates the exchange of management information between network devices. It is part of the TCP/IP protocol suite. SNMP enables network administrators to manage network performance, find and solve network problems and plan for network growth. There are two versions of SNMP, v1 and v2. Both versions have a number of features in common, but SNMP v2 offers enhancements, such as additional protocol operations.

SNMP version 1 is described in RFC 1157 and functions within the specifications of the Structure of Management Information (SMI). SNMP v1 operates over protocols such as the User Datagram Protocol (UDP), IP, OSI Connectionless Network Service (CLNS), AppleTalk Datagram-Delivery Protocol (DDP), and Novell Internet Packet Exchange (IPX). SNMP v1 is widely used and is the de facto network management protocol in the Internet community.

SNMP is a simple request-response protocol: the network management system issues a request, and managed devices return responses. This behaviour is implemented using one of four protocol operations: Get, GetNext, Set and Trap. The Get operation is used by the network management system (NMS) to retrieve the value of one or more object instances from an agent. If the agent responding to the Get operation cannot provide values for all the object instances in a list, it provides no values. The GetNext operation is used by the NMS to retrieve the value of the next object instance in a table or list within an agent. The Set operation is used by the NMS to set the values of object instances within an agent. The Trap operation is used by agents to asynchronously inform the NMS of a significant event.

SNMP version 2 is an evolution of SNMP v1. It was originally published as a set of proposed Internet Standards in 1993. SNMP v2 functions within the specifications of the Structure of Management Information (SMI), which defines the rules for describing management information using Abstract Syntax Notation One (ASN.1). The Get, GetNext and Set operations used in SNMP v1 are exactly the same as those used in SNMP v2; however, SNMP v2 adds and enhances some protocol operations. SNMP v2 also defines two new protocol operations: GetBulk and Inform. The GetBulk operation is used by the NMS to efficiently retrieve large blocks of data, such as multiple rows in a table; GetBulk fills a response message with as much of the requested data as will fit. The Inform operation allows one NMS to send trap information to another NMS and receive a response. SNMP lacks any authentication capabilities, which results in vulnerability to a variety of security threats, including masquerading, modification of information, message sequence and timing modifications, and disclosure.
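The Get and GetNext operations can be sketched against a toy agent in a few lines of Python. The MIB contents below are made-up example bindings, but the lexicographic-successor rule behind GetNext (the basis of table walks) is the real semantics:

```python
# Toy agent MIB: OIDs (as integer tuples) mapped to values.
mib = {
    (1, 3, 6, 1, 2, 1, 1, 1, 0): "Linux router",     # sysDescr.0
    (1, 3, 6, 1, 2, 1, 1, 3, 0): 123456,             # sysUpTime.0
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "gw1.example.net",  # sysName.0
}

def snmp_get(oid):
    """Get: return the value bound to an exact OID, or None if absent."""
    return mib.get(oid)

def snmp_getnext(oid):
    """GetNext: return the first (oid, value) lexicographically after `oid`."""
    for candidate in sorted(mib):
        if candidate > oid:
            return candidate, mib[candidate]
    return None  # end of MIB view

print(snmp_get((1, 3, 6, 1, 2, 1, 1, 5, 0)))      # the sysName.0 value
print(snmp_getnext((1, 3, 6, 1, 2, 1, 1, 1, 0)))  # the sysUpTime.0 binding
```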


This is all about SNMP: how it works and what it does.


Enjoy......

DNS - Converting IP Addresses

Posted by Harisinh | Posted in | Posted on 11:04 PM

0

----------------------------|
CONVERTING IP ADDRESSES :   |
----------------------------|


To identify an entity, TCP/IP protocols use the IP address, which uniquely identifies the connection of a host to the Internet. However, users prefer a system that can map a name to an address or an address to a name. This section considers converting a name to an address and vice versa, mapping between high-level machine names and IP addresses.


Domain Name System (DNS) :
--------------------------

The Domain Name System (DNS) uses a hierarchical naming scheme known as domain names. The mechanism that implements a machine name hierarchy for TCP/IP is called DNS.

DNS has two conceptual aspects: the first specifies the name syntax and rules for delegating authority over names, and the second specifies the implementation of a distributed computing system that efficiently maps names to addresses.
DNS is a protocol that can be used in different platforms. In the Internet, the domain name space is divided into three different sections: generic domain, country domain and inverse domain.

A DNS server maintains a list of hostnames and IP addresses, allowing computers that query them to find remote computers by specifying hostnames rather than IP addresses. DNS is a distributed database and therefore DNS servers can be configured to use a sequence of name servers, based on the domains in the name being looked for.
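The inverse domain mentioned above maps an address back to a name: a query is sent for a special reverse-pointer name derived from the address. Python's standard ipaddress module can compute that name (the addresses below are documentation-range examples):

```python
import ipaddress

# IPv4: octets reversed under the in-addr.arpa domain.
print(ipaddress.ip_address("192.0.2.10").reverse_pointer)
# 10.2.0.192.in-addr.arpa

# IPv6: hex nibbles reversed under the ip6.arpa domain.
print(ipaddress.ip_address("2001:db8::1").reverse_pointer)
```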


This is all about DNS: how names and IP addresses are converted in the Internet world.


Enjoy.....

RIP - OSPF - BGP - Routing Protocols

Posted by Harisinh | Posted in | Posted on 11:04 PM

0

Routing Protocols :


An internet is a combination of networks connected by routers. When a datagram goes from a source to a destination, it will probably pass through many routers until it reaches the router attached to the destination network. A router chooses the route with the shortest metric; the metric assigned to each network depends on the type of protocol. The Routing Information Protocol (RIP) is a simple protocol which treats each network as an equal. The Open Shortest Path First (OSPF) protocol is an interior routing protocol that is becoming very popular. Border Gateway Protocol (BGP) is an inter-autonomous-system routing protocol which first appeared in 1989.


1 Routing Information Protocol (RIP) :


The Routing Information Protocol (RIP) is a protocol used to propagate routing information inside an autonomous system. Today, the Internet is so large that one routing protocol cannot handle the task of updating the routing tables of all routers. Therefore, the Internet is divided into autonomous systems.

An Autonomous System (AS) is a group of networks and routers under the authority of a single administration. Routing inside an autonomous system is referred to as interior routing. RIP and OSPF are popular interior routing protocols used to update routing tables in an AS. Routing between autonomous systems is referred to as exterior routing. RIP is a popular protocol belonging to the interior routing class. It is a very simple protocol based on distance-vector routing, which uses the Bellman-Ford algorithm for calculating routing tables.

A RIP routing table entry consists of a destination network address, the hop count to that destination and the IP address of the next router. RIP uses three timers: the periodic timer controls the advertising of the update message, the expiration timer governs the validity of a route, and the garbage collection timer advertises the failure of a route. However, two shortcomings associated with the RIP protocol are slow convergence and instability.
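One distance-vector update step can be sketched in a few lines of Python, in the spirit of RIP. The router and network names are made up for the example; the rules of adding one hop, capping at 16 (RIP's "infinity"), and always accepting an update from the current next hop are standard RIP behaviour:

```python
RIP_INFINITY = 16  # RIP treats a hop count of 16 as unreachable

def rip_update(table, neighbour, advertised):
    """Merge a neighbour's advertised routes into our table (Bellman-Ford step).

    table:      {destination: (hop_count, next_router)}
    advertised: {destination: hop_count} as sent by `neighbour`.
    """
    for dest, hops in advertised.items():
        new_cost = min(hops + 1, RIP_INFINITY)
        current = table.get(dest)
        # Adopt the route if it is new, cheaper, or comes from the current next hop.
        if current is None or new_cost < current[0] or current[1] == neighbour:
            table[dest] = (new_cost, neighbour)

table = {"net-a": (2, "R2")}
rip_update(table, "R3", {"net-a": 3, "net-b": 1})
print(table)  # net-a stays via R2 at 2 hops; net-b is learned via R3 at 2 hops
```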


2 Open Shortest Path First (OSPF) :


Open Shortest Path First (OSPF) is a newer alternative to RIP as an interior routing protocol, and it overcomes the limitations of RIP. Link-state routing is a process by which each router shares its knowledge about its neighbourhood with every other router in the area. OSPF uses link-state routing to update the routing tables in an area, as opposed to RIP, which is a distance-vector protocol. The term distance-vector means that messages sent by RIP contain a vector of distances (hop counts).

In practice, the important difference between the two protocols is that a link-state protocol always converges faster than a distance-vector protocol.

OSPF divides an autonomous system (AS) into areas, defined as collections of networks, hosts and routers. At the border of an area, area border routers summarise information about the area and send it to other areas. Among the areas inside an autonomous system there is a special area called the backbone; all the areas inside an AS must be connected to the backbone, whose area identification is zero. OSPF defines four types of links: point-to-point, transient, stub and virtual.

Point-to-point links between routers do not need an IP address at each end. Unnumbered links can save IP addresses. A transient link is a network with several routers attached to it. A stub link is a network that is connected to only one router. When the link between two routers is broken, the administration may create a virtual link between them using a longer path that probably goes through several routers. A simple authentication scheme can be used in OSPF. OSPF uses multicasting rather than broadcasting in order to reduce the load on systems not participating in OSPF.
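The shortest-path computation that a link-state router runs over its database is Dijkstra's algorithm. Here is a compact Python sketch over a hypothetical three-router area with assumed link costs:

```python
import heapq

def dijkstra(lsdb, source):
    """Shortest-path distances over a link-state database {router: {neighbour: cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, cost in lsdb.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical topology with link costs (costs, not hop counts as in RIP).
lsdb = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2},
}
print(dijkstra(lsdb, "A"))  # {'A': 0, 'B': 1, 'C': 3}: A reaches C via B
```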

Distance-vector Multicast Routing Protocol (DVMRP) is used in conjunction with IGMP to handle multicast routing. DVMRP is a simple protocol based on distance-vector routing and the idea of MBONE. Multicast Open Shortest Path First (MOSPF), an extension to the OSPF protocol, adds a new type of packet (called the group membership packet) to the list of link-state advertisement packets. MOSPF also uses the configuration of MBONE and islands.


3 Border Gateway Protocol (BGP) :


BGP is an exterior gateway protocol for communication between routers in different autonomous systems. BGP is based on a routing method called path-vector routing. RFC 1772 (1991) describes the use of BGP in the Internet. BGP version 3 is defined in RFC 1267 (1991) and BGP version 4 in RFC 1467 (1993). Path-vector routing is different from both distance-vector routing and link-state routing; it has neither the instability nor the looping problems of distance-vector routing.

Each entry in the routing table contains the destination network, the next router and the path to reach the destination. The path is usually defined as an ordered list of autonomous systems that a packet should travel through to reach the destination. BGP is different from RIP and OSPF in that BGP uses TCP as its transport protocol.
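The way the AS path prevents loops can be sketched in a couple of lines: a router rejects any route whose path already contains its own AS number, so a route can never circle back. The AS numbers below are from the private range and purely illustrative:

```python
MY_AS = 65001  # our (hypothetical, private-range) autonomous system number

def accept_route(as_path):
    """Path-vector loop prevention: reject a route whose AS path
    already contains our own AS number."""
    return MY_AS not in as_path

print(accept_route([65010, 65020]))         # True: loop-free path
print(accept_route([65010, 65001, 65020]))  # False: we are already on the path
```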

There are four types of BGP messages: open, update, keepalive and notification. BGP detects the failure of either the link or the host on the other end of the TCP connection by sending a keepalive message to its neighbour on a regular basis.


This is all about routing protocols: how they work and what they do.


Enjoy.....

TELNET - Remote System Programs

Posted by Harisinh | Posted in | Posted on 11:04 PM

0

Remote System Programs :

High-level services allow users and programs to interact with automated services on remote machines and with remote users. This section describes programs that include Rlogin (Remote login) and TELNET (TErminaL NETwork).

1 TELNET :


TELNET is a simple remote terminal protocol that allows a user to log on to a computer across an Internet. TELNET establishes a TCP connection, and then passes keystrokes from the user’s keyboard directly to the remote computer as if they had been typed on a keyboard attached to the remote machine.

TELNET also carries output from the remote machine back to the user’s screen. The service is called transparent because it looks as if the user’s keyboard and display attach directly to the remote machine. TELNET client software allows the user to specify a remote machine either by giving its domain name or IP address.

TELNET offers three basic services. First, it defines a network virtual terminal that provides a standard interface to remote systems. Second, TELNET includes a mechanism that allows the client and server to negotiate options. Finally, TELNET treats both ends of the connection symmetrically.



2 Remote Login (Rlogin) :

Rlogin was designed for remote login only between UNIX hosts. This makes it a simpler protocol than TELNET, because option negotiation is not required when the operating systems on the client and server are known in advance. Over the past few years, Rlogin has also been ported to several non-UNIX environments. RFC 1282 specifies the Rlogin protocol.

When a user wants to access an application program or utility located on a remote machine, the user performs remote login. The user sends the keystrokes to the terminal driver where the local operating system accepts the characters but does not interpret them.

The characters are sent to the TELNET client, which transforms the characters into a universal character set called Network Virtual Terminal (NVT) characters and delivers them to the local TCP/IP stack. The commands or text (in NVT form) travel through the Internet and arrive at the TCP/IP stack at the remote machine.

Here the characters are delivered to the operating system and passed to the TELNET server, which changes the characters to the corresponding characters understandable by the remote computer.


This is all about remote system programs (TELNET and Rlogin): how they work and what they do.


Enjoy.....

FTP - File Transfer

Posted by Harisinh | Posted in | Posted on 10:33 PM

0

-


The file transfer application allows users to send or receive a copy of a data file. Access to data on remote files takes two forms: whole-file copying and shared online access. FTP is the major file transfer protocol in the TCP/IP suite. TFTP provides a small, simple alternative to FTP for applications that need only file transfer. NFS provides online shared file access.


1 File Transfer Protocol (FTP) :

File Transfer Protocol (FTP) is the standard mechanism provided by TCP/IP for copying a file from one host to another. The FTP protocol is defined in RFC 959 and further updated in RFCs 2228, 2640 and 2773. In transferring files from one system to another, the two systems may have different ways to represent text and data, and may have different directory structures. All of these problems are solved by FTP in a very simple and elegant way.

FTP differs from other client-server applications in that it establishes two connections between the hosts. One connection is used for data transfer (port 20), the other for control information (port 21). The control connection remains open during the entire FTP session and is used to send control messages and client commands between the client and server. A data connection is established using an ephemeral port and is created each time a file is transferred between the client and server. This separation of commands and data transfer makes FTP more efficient.

FTP allows the client to specify whether a file contains text (ASCII or EBCDIC character sets) or binary integers. FTP requires clients to authorise themselves by sending a login name and password to the server before requesting file transfers. Since FTP is used only to send and receive files, it is very difficult for hackers to exploit.
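The ephemeral data-connection port is negotiated on the control connection. For example, a passive-mode (227) reply encodes the server's data endpoint as six decimal numbers: four address octets plus the port split into a high and a low byte. A small Python sketch of the parsing (the reply string is a made-up example):

```python
import re

def parse_pasv(reply: str):
    """Extract (host, port) from a 227 'Entering Passive Mode' reply.
    The six numbers are h1,h2,h3,h4,p1,p2 with port = p1 * 256 + p2."""
    match = re.search(r"(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, match.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (192,168,1,2,19,137)"))
# ('192.168.1.2', 5001)
```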


2 Trivial File Transfer Protocol (TFTP) :

Trivial File Transfer Protocol (TFTP) is designed simply to copy a file without the need for all of the functionality of the FTP protocol. TFTP copies files quickly because it does not require the sophistication provided in FTP. TFTP can read or write a file for the client. Since TFTP restricts operations to simple file transfer and does not provide authentication, TFTP software is much smaller than FTP software.


3 Network File System (NFS) :


The Network File System (NFS), developed by Sun Microsystems, provides online shared file access that is transparent and integrated. The file access mechanism accepts the request and automatically passes it to either the local file system software or to the NFS client, depending on whether the file is on the local disk or on a remote machine. When it receives a request, the client software uses the NFS protocol to contact the appropriate server on a remote machine and performs the requested operation. When the remote server replies, the client software returns the results to the application program. Since Sun's Remote Procedure Call (RPC) and eXternal Data Representation (XDR) are defined separately from NFS, programmers can use them to build distributed applications.


This is all about FTP: how it governs the rules for file transfer and how it works.


Enjoy.....

SMTP - Simple Mail Transfer Protocol

Posted by Harisinh | Posted in | Posted on 10:33 PM

0

-


The Simple Mail Transfer Protocol (SMTP) provides a basic e-mail facility. SMTP is the protocol that transfers e-mail from one server to another, providing a mechanism for transferring messages among separate servers. Features of SMTP include mailing lists, return receipts and forwarding. SMTP accepts an incoming message and makes use of TCP to send it to an SMTP module on another server. The target SMTP module makes use of a local electronic mail package to store the incoming message in a user's mailbox. Once the SMTP server identifies the IP address for the recipient's e-mail server, it sends the message through standard TCP/IP routing procedures.

Since SMTP is limited in its ability to queue messages at the receiving end, it’s usually used with one of two other protocols, POP3 or IMAP, that let the user save messages in a server mailbox and download them periodically from the server. In other words, users typically use a program that uses SMTP for sending e-mail and either POP3 or IMAP for receiving messages that have been received for them at their local server. Most mail programs (such as Eudora) let you specify both an SMTP server and a POP server. On UNIX-based systems, sendmail is the most widely-used SMTP server for e-mail. Earlier versions of sendmail presented many security risk problems. Through the years, however, sendmail has become much more secure, and can now be used with confidence.

A commercial version of sendmail includes a POP3 server, and there is also a version for Windows NT. Hackers often use different forms of attack with SMTP. A hacker might create a fake e-mail message and send it directly to an SMTP server. Other security risks associated with SMTP servers are denial-of-service attacks: hackers will often flood an SMTP server with so many e-mails that the server cannot handle legitimate e-mail traffic. This type of flood effectively makes the SMTP server useless, thereby denying service to legitimate e-mail users. Another well-known risk of SMTP is the sending and receiving of viruses and Trojan horses. The information in the header of an e-mail message is easily forged. The body of an e-mail message contains standard text or a real message.

Newer e-mail programs can send messages in HTML format. Viruses and Trojan horses cannot be carried within the header or the text body of an e-mail message, but they may be sent as attachments. The best defence against malicious attachments is to use an SMTP server that scans all messages for viruses, or a proxy server that scans all incoming and outgoing messages. SMTP is usually implemented to operate over TCP port 25. The details of SMTP are in RFC 2821 of the Internet Engineering Task Force (IETF). An alternative to SMTP that is widely used in Europe is X.400.


This is all about SMTP: how it sends a message from sender to receiver and guides it to its destination.


Enjoy.....

MIME - Multipurpose Internet Mail Extension

Posted by Harisinh | Posted in | Posted on 10:33 PM

0

-


The Multipurpose Internet Mail Extension (MIME) is defined to allow transmission of non-ASCII data via e-mail. MIME allows arbitrary data to be encoded in ASCII and then transmitted in a standard e-mail message.


SMTP cannot be used for languages that are not supported by seven-bit ASCII characters, nor can it be used for binary files or to send video or audio data. MIME is a supplementary protocol that allows non-ASCII data to be sent through SMTP: a set of software functions that transforms non-ASCII data to ASCII data and vice versa.
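The transformation MIME performs can be illustrated with Base64, one of its standard transfer encodings, using Python's standard library:

```python
import base64

# Arbitrary 8-bit binary data that seven-bit SMTP cannot carry directly.
binary = bytes(range(256))

# Base64 maps every 3 bytes to 4 characters from a 64-symbol ASCII alphabet.
ascii_form = base64.b64encode(binary)
print(ascii_form[:16])  # b'AAECAwQFBgcICQoL'

# Decoding on the receiving side recovers the original bytes exactly.
print(base64.b64decode(ascii_form) == binary)  # True
```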

This is all about MIME: how it works and what it does.

Enjoy.....

POP 3 - Post Office Protocol Version 3

Posted by Harisinh | Posted in | Posted on 10:33 PM

0

-

The most popular protocol used to transfer e-mail messages from a permanent mailbox to a local computer is known as the Post Office Protocol version 3 (POP3). The user invokes a POP3 client, which creates a TCP connection to a POP3 server on the mailbox computer. The user first sends a login and a password to authenticate the session. Once authentication has been accepted, the user client sends commands to retrieve a copy of one or more messages and to delete the message from the permanent mailbox.

The messages are stored and transferred as text files in RFC 2822 standard format. Note that computers with a permanent mailbox must run two servers: an SMTP server accepts mail sent to a user and adds each incoming message to the user's permanent mailbox, and a POP3 server allows a user to extract messages from the mailbox and delete them. To ensure correct operation, the two servers must coordinate with the mailbox so that if a message arrives via SMTP while a user extracts messages via POP3, the mailbox is left in a valid state.

This is all about POP version 3: how it works, much like a post office for e-mail.


Enjoy.....

IMAP - Internet Message Access Protocol

Posted by Harisinh | Posted in | Posted on 10:33 PM

0

-


The Internet Message Access Protocol (IMAP) is a standard protocol for accessing email from your local server. IMAP4 (the latest version) is a client–server protocol in which e-mail is received and held for you by your Internet server. You (or your e-mail client) can view just the subject and the sender of the e-mail and then decide whether to download the mail.

You can also create, manipulate and delete folders or mailboxes on the server, delete messages or search for certain e-mails. IMAP requires continual access to the server during the time that you are working with your mail.
A less sophisticated protocol is Post Office Protocol 3 (POP3). With POP3, your mail is saved for you in your mailbox on the server. When you read your mail, it is immediately downloaded to your computer and no longer maintained on the server.

IMAP can be thought of as a remote file server. POP can be thought of as a 'store-and-forward' service. POP and IMAP deal with receiving e-mail from your local server and are not to be confused with SMTP, a protocol for transferring e-mail between points on the Internet. You send e-mail by SMTP and a mail handler receives it on your recipient's behalf. Then the mail is read using POP or IMAP.


This is all about IMAP: how it works and how account holders get their messages.


Enjoy.....

Java - Support on the Web

Posted by Harisinh | Posted in | Posted on 1:25 PM

0

-

Java is a combination of a high-level programming language, a run-time environment and
a library that allows a programmer to write an active document and a browser to run it.


It can also be used as a stand-alone program without using a browser. However, Java is mostly used to create small application programs called applets.


This is all about Java applets and how they work on the Internet.
Enjoy.....

DES - Data Encryption Standard

Posted by Harisinh | Posted in | Posted on 11:04 PM

1

-

In the late 1960s, IBM initiated a research project in computer cryptography, led by Horst Feistel and named Lucifer. The project ended in 1971, and Lucifer became known as a block cipher that operated on blocks of 64 bits, using a key size of 128 bits. Soon after this, IBM embarked on another effort to develop a commercial encryption scheme, which was later called DES.

This research effort was led by Walter Tuchman. The outcome of this effort was a refined version of Lucifer that was more resistant to cryptanalysis. In 1973, the National Bureau of Standards (NBS), now the National Institute of Standards and Technology (NIST), issued a public request for proposals for a national cipher standard. IBM submitted the research results of the DES project as a possible candidate.

The NBS requested the National Security Agency (NSA) to evaluate the algorithm’s security and to determine its suitability as a federal standard. In November 1976, the Data Encryption Standard was adopted as a federal standard and authorised for use on all unclassified US government communications. The official description of the standard, FIPS PUB 46, Data Encryption Standard was published on 15 January 1977.


The DES algorithm was the best one proposed and was adopted in 1977 as the Data Encryption Standard, even though there was much criticism of its key length (which had been reduced from Lucifer's original 128 bits to 56 bits) and of the design criteria for the internal structure of DES, i.e., the S-boxes. Nevertheless, DES has survived remarkably well over 20 years of intense cryptanalysis and has been a worldwide standard for over 18 years. The work on differential cryptanalysis seems to indicate that DES has a very strong internal structure.

Since the terms of the standard stipulate that it be reviewed every five years, on 6 March 1987 the NBS published in the Federal Register a request for comments on the second five-year review. The comment period closed on 10 December 1992. After much debate, DES was reaffirmed as a US government standard until 1992 because there was still no alternative for DES. The NIST again solicited a review to assess the continued adequacy of DES to protect computer data.

In 1993, NIST formally solicited comments on the recertification of DES. After reviewing many comments and technical inputs, NIST recommended that the useful lifetime of DES would end in the late 1990s. In 2001, the Advanced Encryption Standard (AES), based on the Rijndael algorithm, became the FIPS-approved advanced symmetric cipher algorithm, a stronger algorithm intended to serve in lieu of DES.

DES is now a basic security device employed by worldwide organisations. Therefore, it is likely that DES will continue to be used to protect network communications, stored data, passwords and access control systems.


That's all about DES, the Data Encryption Standard, and a little of its history.


Enjoy.....

Computer Security Requires a Comprehensive and Integrated Approach

Posted by Harisinh | Posted in | Posted on 12:45 AM

0

-


Providing effective computer security requires a comprehensive approach that considers a variety of areas both within and outside of the computer security field. This comprehensive approach extends throughout the entire information life cycle.

1 Interdependencies of Security Controls :

To work effectively, security controls often depend upon the proper functioning of other controls. In fact, many such interdependencies exist. If appropriately chosen, managerial, operational, and technical controls can work together synergistically. On the other hand, without a firm understanding of the interdependencies of security controls, they can actually undermine one another. For example, without proper training on how and when to use a virus-detection package, the user may apply the package incorrectly and, therefore, ineffectively. As a result, the user may mistakenly believe that their system will always be virus-free and may inadvertently spread a virus. In reality, these interdependencies are usually more complicated and difficult to ascertain.

2 Other Interdependencies :

The effectiveness of security controls also depends on such factors as system management, legal issues, quality assurance, and internal and management controls. Computer security needs to work with traditional security disciplines including physical and personnel security. Many other important interdependencies exist that are often unique to the organization or system environment. Managers should recognize how computer security relates to other areas of systems and organizational management.

3 Computer Security Should Be Periodically Reassessed :

Computers and the environments they operate in are dynamic. System technology and users, data and information in the systems, risks associated with the system and, therefore, security requirements are ever-changing. Many types of changes affect system security: technological developments (whether adopted by the system owner or available for use by others); connecting to external networks; a change in the value or use of information; or the emergence of a new threat. In addition, security is never perfect when a system is implemented. System users and operators discover new ways to intentionally or unintentionally bypass or subvert security. Changes in the system or the environment can create new vulnerabilities. Strict adherence to procedures is rare, and procedures become outdated over time. All of these issues make it necessary to reassess the security of computer systems.


Here I just changed my topic from Internet hardware and protocols to computer security.


Enjoy.....

Computer Security Supports the Mission of the Organization

Posted by Harisinh | Posted in | Posted on 12:45 AM

0

-


The purpose of computer security is to protect an organization's valuable resources, such as information, hardware, and software. Through the selection and application of appropriate safeguards, security helps the organization's mission by protecting its physical and financial resources, reputation, legal position, employees, and other tangible and intangible assets. Unfortunately, security is sometimes viewed as thwarting the mission of the organization by imposing poorly selected, bothersome rules and procedures on users, managers, and systems. On the contrary, well-chosen security rules and procedures do not exist for their own sake; they are put in place to protect important assets and thereby support the overall organizational mission. Security, therefore, is a means to an end and not an end in itself. For example, in a private-sector business, having good security is usually secondary to the need to make a profit. Security, then, ought to increase the firm's ability to make a profit. In a public-sector agency, security is usually secondary to the agency's service provided to citizens. Security, then, ought to help improve the service provided to the citizen.

To act on this, managers need to understand both their organizational mission and how each information system supports that mission. After a system's role has been defined, the security requirements implicit in that role can be defined. Security can then be explicitly stated in terms of the organization's mission.

The roles and functions of a system may not be constrained to a single organization. In an interorganizational system, each organization benefits from securing the system. For example, for electronic commerce to be successful, each of the participants requires security controls to protect their resources. However, good security on the buyer's system also benefits the seller; the buyer's system is less likely to be used for fraud or to be unavailable or otherwise negatively affect the seller. (The reverse is also true.)


Here I just changed my topic from computer hardware and protocols to computer security.


Enjoy.....

Computer Security is an Integral Element of Sound Management

Posted by Harisinh | Posted in | Posted on 12:45 AM

0

-


Information and computer systems are often critical assets that support the mission of an organization. Protecting them can be as critical as protecting other organizational resources, such as money, physical assets, or employees. However, including security considerations in the management of information and computers does not completely eliminate the possibility that these assets will be harmed. Ultimately, organization managers have to decide what level of risk they are willing to accept, taking into account the cost of security controls.

As with many other resources, the management of information and computers may transcend organizational boundaries. When an organization's information and computer systems are linked with external systems, management's responsibilities also extend beyond the organization.

This may require that management (1) know what general level or type of security is employed on the external system(s) or (2) seek assurance that the external system provides adequate security for the using organization's needs.


Here I just changed my topic from Internet hardware and protocols to computer security.


Enjoy.....

Computer Security Should Be Cost-Effective

Posted by Harisinh | Posted in | Posted on 12:45 AM

1

-


Computer Security Should Be Cost-Effective. The costs and benefits of security should be carefully examined in both monetary and nonmonetary terms to ensure that the cost of controls does not exceed expected benefits. Security should be appropriate and proportionate to the value of and degree of reliance on the computer systems and to the severity, probability and extent of potential harm. Requirements for security vary, depending upon the particular computer system.

In general, security is a smart business practice. By investing in security measures, an organization can reduce the frequency and severity of computer security-related losses. For example, an organization may estimate that it is experiencing significant losses per year in inventory through fraudulent manipulation of its computer system. Security measures, such as an improved access control system, may significantly reduce the loss. Moreover, a sound security program can thwart hackers and can reduce the frequency of viruses. Elimination of these kinds of threats can reduce unfavorable publicity as well as increase morale and productivity. Security benefits, however, do have both direct and indirect costs.

Direct costs include purchasing, installing, and administering security measures, such as access control software or fire-suppression systems. Additionally, security measures can sometimes affect system performance, employee morale, or retraining requirements. All of these have to be considered in addition to the basic cost of the control itself. In many cases, these additional costs may well exceed the initial cost of the control (as is often seen, for example, in the costs of administering an access control package). Solutions to security problems should not be chosen if they cost more, directly or indirectly, than simply tolerating the problem.
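The trade-off described above can be sketched with the standard annualised loss expectancy (ALE) idea. The function name and every dollar figure below are hypothetical, chosen only to make the comparison concrete:

```python
# Net annual value of a control: the loss reduction it buys, minus
# what it costs to buy and run. Figures are illustrative only.
def safeguard_value(ale_before, ale_after, annual_cost):
    return (ale_before - ale_after) - annual_cost

# A control cutting expected losses from $100,000/yr to $20,000/yr
# at an annual cost of $30,000 is worth adopting...
print(safeguard_value(100_000, 20_000, 30_000))   # positive net value
# ...while one costing more than the loss it prevents is not.
print(safeguard_value(10_000, 9_000, 5_000))      # negative net value
```

A negative result is exactly the "costs more than tolerating the problem" case the handbook warns against.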


Here I just changed my topic from Internet hardware and protocols to computer security.


Enjoy.....

Computer Security Responsibilities and Accountability Should Be Made Explicit

Posted by Harisinh | Posted in | Posted on 12:45 AM

0

-


Computer Security Responsibilities and Accountability Should Be Made Explicit. The responsibilities and accountability of owners, providers, and users of computer systems and other parties concerned with the security of computer systems should be explicit. The assignment of responsibilities may be internal to an organization or may extend across organizational boundaries.

Depending on the size of the organization, the program may be large or small, even a collateral duty of another management official. However, even small organizations can prepare a document that states organization policy and makes explicit computer security responsibilities.

This element does not specify that individual accountability must be provided for on all systems. For example, many information dissemination systems do not require user identification and, therefore, cannot hold users accountable.


Here I just changed my topic from Internet hardware and protocols to computer security.


Enjoy.....

Risk Management - Selecting Safeguards

Posted by Harisinh | Posted in | Posted on 2:29 AM

0

-


A primary function of computer security risk management is the identification of appropriate controls. In designing (or reviewing) the security of a system, it may be obvious that some controls should be added (e.g., because they are required by law or because they are clearly cost-effective).

It may also be just as obvious that other controls may be too expensive (considering both monetary and nonmonetary factors). For example, it may be immediately apparent to a manager that closing and locking the door to a particular room that contains local area network equipment is a needed control, while posting a guard at the door would be too expensive and not user-friendly.

In every assessment of risk, there will be many areas for which it will not be obvious what kind of controls are appropriate. Even considering only monetary issues, such as whether a control would cost more than the loss it is supposed to prevent, the selection of controls is not simple. However, in selecting appropriate controls, managers need to consider many factors, including: organizational policy, legislation, and regulation; safety, reliability, and quality requirements; system performance requirements; timeliness, accuracy, and completeness requirements; the life cycle costs of security measures; technical requirements; and cultural constraints.

One method of selecting safeguards uses a "what if" analysis. With this method, the effect of adding various safeguards (and, therefore, reducing vulnerabilities) is tested to see what difference each makes with regard to cost, effectiveness, and other relevant factors, such as those listed above. Trade-offs among the factors can be seen. The analysis of trade-offs also supports the acceptance of residual risk, discussed below.

This method typically involves multiple iterations of the risk analysis to see how the proposed changes affect the risk analysis result.Another method is to categorize types of safeguards and recommend implementing them for various levels of risk. For example, stronger controls would be implemented on high-risk systems than on low-risk systems. This method normally does not require multiple iterations of the risk analysis.
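A minimal sketch of the "what if" pass described above, assuming hypothetical loss-reduction and cost figures (the safeguard names echo the locked-door/guard example earlier in this post):

```python
# Toy "what if" comparison: for each candidate safeguard, what net
# annual benefit would adding it produce? All numbers are made up.
safeguards = {                    # name: (loss reduction, annual cost)
    "locked door":    (30_000,  1_000),
    "access control": (50_000, 15_000),
    "posted guard":   (60_000, 70_000),
}

net = {name: reduction - cost
       for name, (reduction, cost) in safeguards.items()}

for name, value in sorted(net.items(), key=lambda kv: -kv[1]):
    verdict = "worth it" if value > 0 else "too expensive"
    print(f"{name:14s} net {value:+8,d}  ({verdict})")
```

A real analysis would also weigh the nonmonetary factors the handbook lists (policy, safety, performance, culture); this sketch only shows the monetary trade-off being iterated.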

As with other aspects of risk management, screening can be used to concentrate on the highest-risk areas. For example, one could focus on risks with very severe consequences, such as a very high dollar loss or loss of life, or on the threats that are most likely to occur.

That's all about selecting safeguards in risk management.

Enjoy.....

Risk Management - Accept Residual Risk

Posted by Harisinh | Posted in | Posted on 2:29 AM

0

-


At some point, management needs to decide if the operation of the computer system is acceptable, given the kind and severity of remaining risks. Many managers do not fully understand computer-based risk for several reasons:

(1) the type of risk may be different from risks previously associated with the organization or function;

(2) the risk may be technical and difficult for a lay person to understand, or

(3) the proliferation and decentralization of computing power can make it difficult to identify key assets that may be at risk.


Risk acceptance, like the selection of safeguards, should take into account various factors besides those addressed in the risk assessment. In addition, risk acceptance should take into account the limitations of the risk assessment. (See the section below on uncertainty.) Risk acceptance is linked to the selection of safeguards since, in some cases, risk may have to be accepted because safeguards are too expensive (in either monetary or nonmonetary factors).

Within the federal government, the acceptance of risk is closely linked with the authorization to use a computer system, often called accreditation, discussed in Chapters 8 and 9. Accreditation is the acceptance of risk by management resulting in a formal approval for the system to become operational or remain so. As discussed earlier in this chapter, one of the two primary functions of risk management is the interpretation of risk for the purpose of risk acceptance.


That's all about accepting residual risk in risk management.

Risk Management - Implementing Controls and Monitoring Effectiveness

Posted by Harisinh | Posted in | Posted on 2:29 AM

0

-


Implementing controls and monitoring their effectiveness is a small topic, but it has a large effect on your computer security policy. It is central to making your organization secure.

Merely selecting appropriate safeguards does not reduce risk; those safeguards need to be effectively implemented. Moreover, to continue to be effective, risk management needs to be an ongoing process.

This requires a periodic assessment and improvement of safeguards and reanalysis of risks. Chapter 8 discusses how periodic risk assessment is an integral part of the overall management of a system. (See especially the diagram on page 83.)

The risk management process normally produces security requirements that are used to design, purchase, build, or otherwise obtain safeguards or implement system changes.


I like to talk about these kinds of risk management topics. This topic is very small compared to the others. If anybody has more knowledge about implementing controls and monitoring effectiveness in risk management, please write a comment. I am waiting for your reply.

Thanks.

Risk Management - Uncertainty Analysis

Posted by Harisinh | Posted in | Posted on 2:29 AM

0

-


Risk management often must rely on speculation, best guesses, incomplete data, and many unproven assumptions. The uncertainty analysis attempts to document this so that the risk management results can be used knowledgeably.

There are two primary sources of uncertainty in the risk management process:
(1) a lack of confidence or precision in the risk management model or methodology and (2) a lack of sufficient information to determine the exact value of the elements of the risk model, such as threat frequency, safeguard effectiveness, or consequences.
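The second source of uncertainty can be made concrete with a toy risk model. The formula below (frequency × consequence × residual exposure) and every number in it are illustrative assumptions, not a methodology from the handbook:

```python
# risk = threat frequency x consequence x (1 - safeguard effectiveness)
# Feeding in optimistic vs pessimistic estimates of each element shows
# how uncertainty in the inputs widens the spread of the result.
def annual_risk(freq, consequence, effectiveness):
    return freq * consequence * (1 - effectiveness)

low  = annual_risk(0.5, 10_000, 0.9)   # optimistic inputs
high = annual_risk(2.0, 50_000, 0.5)   # pessimistic inputs
print(f"expected annual loss: {low:,.0f} to {high:,.0f}")
```

A hundred-fold spread between the optimistic and pessimistic results is exactly why the handbook insists the uncertainty itself be documented.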


The risk management framework presented in this chapter is a generic description of risk management elements and their basic relationships. For a methodology to be useful, it should further refine the relationships and offer some means of screening information. In this process, assumptions may be made that do not accurately reflect the user's environment.

This is especially evident in the case of safeguard selection, where the number of relationships among assets, threats, and vulnerabilities can become unwieldy. The data are another source of uncertainty. Data for the risk analysis normally come from two sources: statistical data and expert analysis. Statistics and expert analysis can sound more authoritative than they really are.

There are many potential problems with statistics. For example, the sample may be too small, other parameters affecting the data may not be properly accounted for, or the results may be stated in a misleading manner. In many cases, there may be insufficient data. When expert analysis is used to make projections about future events, it should be recognized that the projection is subjective and is based on assumptions made (but not always explicitly articulated) by the expert.


That's all about uncertainty analysis in risk management.

Risk Management - Interdependencies

Posted by Harisinh | Posted in | Posted on 2:29 AM

0

-


Risk management touches on every control and every chapter in this handbook. It is, however, most closely related to life cycle management and the security planning process. The requirement to perform risk management is often discussed in organizational policy and is an issue for organizational oversight. These issues are discussed in Cost Considerations.

We will discuss them later. The building blocks of risk management presented in this chapter can be used creatively to develop methodologies that concentrate expensive analysis work where it is most needed. Risk management can become expensive very quickly if an expansive boundary and detailed scope are selected. It is very important to use screening techniques, as discussed in this chapter, to limit the overall effort.

The goals of risk management should be kept in mind as a methodology is selected or developed. The methodology should concentrate on areas where identification of risk and the selection of cost-effective safeguards are needed. The cost of different methodologies can be significant.

A "back-of-the-envelope" analysis or high-medium-low ranking can often provide all the information needed. However, especially for the selection of expensive safeguards or the analysis of systems with unknown consequences, more in-depth analysis may be warranted.

That's all about the interdependencies in risk management.

Threats : Fraud and Theft

Posted by Harisinh | Posted in | Posted on 1:29 AM

0

-


Computer systems can be exploited for both fraud and theft both by "automating" traditional methods of fraud and by using new methods. For example, individuals may use a computer to skim small amounts of money from a large number of financial accounts, assuming that small discrepancies may not be investigated. Financial systems are not the only ones at risk. Systems that control access to any resource are targets (e.g., time and attendance systems, inventory systems, school grading systems, and long-distance telephone systems).
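The "skimming" attack mentioned above works because tiny per-transaction amounts compound. A hypothetical illustration (every figure is made up):

```python
# Why skimming small amounts adds up: shaving a fraction of a cent
# from many transactions. Purely illustrative, made-up figures.
shave_per_txn = 0.004          # dollars diverted per transaction
txns_per_day = 50_000

yearly_take = shave_per_txn * txns_per_day * 365
print(f"${yearly_take:,.0f} per year")  # small per item, large overall
```

Each individual discrepancy is far below any investigation threshold, which is precisely what the attacker is counting on.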

Computer fraud and theft can be committed by insiders or outsiders. Insiders (i.e., authorized users of a system) are responsible for the majority of fraud. A 1993 InformationWeek/Ernst and Young study found that 90 percent of Chief Information Officers viewed employees "who do not need to know" information as threats.

The U.S. Department of Justice's Computer Crime Unit contends that "insiders constitute the greatest threat to computer systems." Since insiders have both access to and familiarity with the victim computer system (including what resources it controls and its flaws), authorized system users are in a better position to commit crimes. Insiders can be either general users (such as clerks) or technical staff members. An organization's former employees, with their knowledge of an organization's operations, may also pose a threat, particularly if their access is not terminated promptly.

In addition to the use of technology to commit fraud and theft, computer hardware and software may be vulnerable to theft. For example, one study conducted by Safeware Insurance found that $882 million worth of personal computers was lost due to theft in 1992.


Here it's all about a little history of the threats of fraud and theft.


Enjoy.....

Threats : A Brief Overview

Posted by Harisinh | Posted in | Posted on 1:29 AM

0

-


Computer systems are vulnerable to many threats that can inflict various types of damage resulting in significant losses. This damage can range from errors harming database integrity to fires destroying entire computer centers. Losses can stem, for example, from the actions of supposedly trusted employees defrauding a system, from outside hackers, or from careless data entry clerks.

Precision in estimating computer security-related losses is not possible because many losses are never discovered, and others are "swept under the carpet" to avoid unfavorable publicity. The effects of various threats vary considerably: some affect the confidentiality or integrity of data while others affect the availability of a system. This chapter presents a broad view of the risky environment in which systems operate today.

The threats and associated losses presented in this chapter were selected based on their prevalence and significance in the current computing environment and their expected growth. This list is not exhaustive, and some threats may combine elements from more than one area.

This overview of many of today's common threats may prove useful to organizations studying their own threat environments; however, the perspective of this chapter is very broad. Thus, threats against particular systems could be quite different from those discussed here.

To control the risks of operating an information system, managers and users need to know the vulnerabilities of the system and the threats that may exploit them. Knowledge of the threat environment allows the system manager to implement the most cost-effective security measures. In some cases, managers may find it more cost-effective to simply tolerate the expected losses. Such decisions should be based on the results of a risk analysis.


I am discussing all about the threats now.

Enjoy.....

Threats - Errors And Omissions

Posted by Harisinh | Posted in | Posted on 1:29 AM

0

-


Errors and omissions are an important threat to data and system integrity. These errors are caused not only by data entry clerks processing hundreds of transactions per day, but also by all types of users who create and edit data. Many programs, especially those designed by users for personal computers, lack quality control measures. However, even the most sophisticated programs cannot detect all types of input errors or omissions.
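The point that even good validation cannot catch everything is easy to see in a small sketch: a syntactic check on a hypothetical date-of-birth field rejects malformed input but happily accepts a plausible-looking impossible date.

```python
import re

def looks_valid(dob):
    """Syntactic check only: does the string have a YYYY-MM-DD shape?"""
    return re.fullmatch(r"\d{4}-\d{2}-\d{2}", dob) is not None

print(looks_valid("19x1-04-13"))  # False: malformed input is caught
print(looks_valid("1991-04-31"))  # True: well-formed, yet April has 30 days
```

Semantic checks can narrow the gap (a real validator would also check calendar validity), but no check can tell a correct date from a plausible typo, which is why errors and omissions remain a threat.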

A sound awareness and training program can help an organization reduce the number and severity of errors and omissions. Users, data entry clerks, system operators, and programmers frequently make errors that contribute directly or indirectly to security problems. In some cases, the error is the threat, such as a data entry error or a programming error that crashes a system. In other cases, the errors create vulnerabilities. Errors can occur during all phases of the systems life cycle.

A long-term survey of computer-related economic losses conducted by Robert Courtney, a computer security consultant and former member of the Computer System Security and Privacy Advisory Board, found that 65 percent of losses to organizations were the result of errors and omissions. This figure was relatively consistent between both private and public sector organizations. Programming and development errors, often called "bugs," can range in severity from benign to catastrophic.

In a 1989 study for the House Committee on Science, Space and Technology, entitled Bugs in the Program, the staff of the Subcommittee on Investigations and Oversight summarized the scope and severity of this problem in terms of government systems as follows: As expenditures grow, so do concerns about the reliability, cost and accuracy of ever-larger and more complex software systems. These concerns are heightened as computers perform more critical tasks, where mistakes can cause financial turmoil, accidents, or in extreme cases, death. Since the study's publication, the software industry has changed considerably, with measurable improvements in software quality.

Yet software "horror stories" still abound, and the basic principles and problems analyzed in the report remain the same. While there have been great improvements in program quality, the concurrent growth in program size often seriously diminishes the beneficial effects of these program quality enhancements. Installation and maintenance errors are another source of security problems. For example, an audit by the President's Council for Integrity and Efficiency (PCIE) in 1988 found that every one of the ten mainframe computer sites studied had installation and maintenance errors that introduced significant security vulnerabilities.


Here it's all about the threats of errors and omissions.


Enjoy.....

Threats - Industrial Espionage

Posted by Harisinh | Posted in | Posted on 1:29 AM

0

-

Industrial espionage is the act of gathering proprietary data from private companies or the government for the purpose of aiding another company or companies. Industrial espionage can be perpetrated either by companies seeking to improve their competitive advantage or by governments seeking to aid their domestic industries.

Foreign industrial espionage carried out by a government is often referred to as economic espionage. Since information is processed and stored on computer systems, computer security can help protect against such threats; it can do little, however, to reduce the threat of authorized employees selling that information. Industrial espionage is on the rise.

A 1992 study sponsored by the American Society for Industrial Security (ASIS) found that proprietary business information theft had increased 260 percent since 1985. The data indicated 30 percent of the reported losses in 1991 and 1992 had foreign involvement. The study also found that 58 percent of thefts were perpetrated by current or former employees.

The three most damaging types of stolen information were pricing information, manufacturing process information, and product development and specification information. Other types of information stolen included customer lists, basic research, sales data, personnel data, compensation data, cost data, proposals, and strategic plans. Within the area of economic espionage, the Central Intelligence Agency has stated that the main objective is obtaining information related to technology, but that information on U.S. Government policy deliberations concerning foreign affairs and information on commodities, interest rates, and other economic factors is also a target.

The Federal Bureau of Investigation concurs that technology-related information is the main target, but also lists corporate proprietary information, such as negotiating positions and other contracting data, as a target.


Here it's all about the threat of industrial espionage.


Enjoy.....

Threats : To Personal Privacy

Posted by Harisinh | Posted in | Posted on 1:29 AM

0

-


The accumulation of vast amounts of electronic information about individuals by governments, credit bureaus, and private companies, combined with the ability of computers to monitor, process, and aggregate large amounts of information about individuals, has created a threat to individual privacy. The possibility that all of this information and technology may be able to be linked together has arisen as a specter of the modern information age.

This is often referred to as "Big Brother." To guard against such intrusion, Congress has enacted legislation, over the years, such as the Privacy Act of 1974 and the Computer Matching and Privacy Protection Act of 1988, which defines the boundaries of the legitimate uses of personal information collected by the government. The threat to personal privacy arises from many sources.

In several cases federal and state employees have sold personal information to private investigators or other "information brokers." One such case was uncovered in 1992 when the Justice Department announced the arrest of over two dozen individuals engaged in buying and selling information from Social Security Administration (SSA) computer files.

During the investigation, auditors learned that SSA employees had unrestricted access to over 130 million employment records. Another investigation found that 5 percent of the employees in one region of the IRS had browsed through tax records of friends, relatives, and celebrities. Some of the employees used the information to create fraudulent tax refunds, but many were acting simply out of curiosity. As more of these cases come to light, many individuals are becoming increasingly concerned about threats to their personal privacy.

A July 1993 special report in MacWorld cited polling data taken by Louis Harris and Associates showing that in 1970 only 33 percent of respondents were.


Enjoy.....

Microprocessor - 80486

Posted by Harisinh | Posted in | Posted on 4:32 AM

0

-


The 80486 was not groundbreaking in terms of a radically different design philosophy, as the 80386 had been. It did, however, have new features that made the 80486 about twice as fast as the fastest 80386. The most talked-about new features were a built-in cache (by the time processor speeds reached the 20-25 MHz range, reasonably priced DRAM could no longer be accessed with zero-wait-state bus cycles) and a built-in math coprocessor (which increased throughput for floating-point operations). On average, the math coprocessor built into the 80486 yielded three times the performance of an external 80387 Numeric Processing Unit (NPU). The speed difference between the 80386 and the 80486 made the Graphical User Interface (GUI) practical for everyday use.

A concept known as the unified internal code/data cache is used in the 80486. It combines the advantage of an external cache with the fact that every time a memory request is fulfilled by the internal cache, one less bus cycle results. In addition, data access is much faster, since the only delay is to look up the data/code and deliver it to the internal requester. This also frees up the bus for other bus masters. The address bus in the 486 is bidirectional because of the presence of cache memory inside the 486 (to enable cache invalidation). It also supports a burst type of bus cycle, which saves time during floating-point operand fetches as well as cache memory fills.
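The benefit of serving requests from the internal cache can be approximated with the usual average-memory-access-time formula. The cycle counts below are illustrative assumptions, not 80486 datasheet figures:

```python
# AMAT = hit time + miss rate x miss penalty (in bus clocks here).
def amat(hit_cycles, miss_rate, penalty_cycles):
    return hit_cycles + miss_rate * penalty_cycles

no_cache   = amat(0, 1.0, 4)   # every access pays the DRAM/bus penalty
with_cache = amat(1, 0.1, 4)   # 90% of requests never touch the bus
print(no_cache, with_cache)
```

Even with these rough numbers, a modest hit rate cuts the average access time severalfold, and every hit is also one less cycle contended for on the external bus.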

The 486 provides internal data conversion logic for both 8-bit and 16-bit subsystems, with dynamic bus sizing supporting 8-, 16- and 32-bit cycles. The 386 does not support direct interfacing of an 8-bit subsystem; external logic is needed for this purpose. The 486 also incorporates several features to simplify debugging. Its on-chip debugging aids are of three types: a breakpoint instruction, single-step capability via the trap flag, and code and data breakpoint capability by means of the debug registers.

The 486 also has a parity generator and parity checker inside it, providing parity logic for the data bus: one parity bit for each data byte. This offers better reliability. The 486 consists of 1.2 million transistors and could run at clock rates of up to 50 MHz.
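Per-byte parity of the kind described is simple to model: the checker recomputes the bit on arrival and compares it with the stored one. A sketch using even parity:

```python
# Even parity per data byte: the stored parity bit makes the total
# number of 1 bits even, so any single flipped bit is detectable.
def parity_bit(byte):
    return bin(byte).count("1") % 2     # 1 if the byte has odd weight

b = 0b1011_0010                  # four 1 bits: even weight
assert parity_bit(b) == 0
corrupted = b ^ 0b0000_1000      # a single bit flips in transit
print(parity_bit(corrupted))     # 1 -> mismatch exposes the error
```

Parity detects any single-bit error per byte but cannot correct it (and misses double flips), which is why it buys reliability in the sense of error detection rather than recovery.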


Microprocessor - 80486

Pentium

Posted by Harisinh | Posted in | Posted on 4:32 AM

0

-


Cyrix and AMD were out in the marketplace selling their CPUs and math coprocessors, calling them 80386 and 80387, just like Intel's. As anyone can guess, Intel was not happy about this. The firm was so mad it went to court to stop its competitors, but one cannot copyright or patent numbers, the judge said. So Intel ran a contest to come up with a name for the 80586 that wasn't a number.

Penta means five, and on the 19th of October 1992 the name Pentium was announced. Thus the Pentium began as the fifth generation of the Intel x86 architecture. The Pentium was paired with an L2 cache of 256KB to 1MB (on the motherboard), used a 50, 60 or 66MHz system bus and contained 3.1 to 3.3 million transistors. As usual, the Pentium was backward compatible while offering new features.

The revolutionary step in this CPU was its twin data pipelines, which enabled the CPU to execute two instructions at the same time. This is known as superscalar technology, typically found in RISC-based CPUs. The Pentium uses a 32-bit expansion bus, but the data bus is 64 bits wide, which means system memory is accessed 64 bits at a time. This is an important distinction to remember when working with some types of RAM packaging.


Pentium Microprocessor.

Microprocessor 80186

Posted by Harisinh | Posted in | Posted on 3:30 AM

0

-


By 1982 Intel came up with the 80186 and 80286, two products compatible with the 8086 and 8088. The 80186, designed by a team under the leadership of Dave Stamm, integrated onto the CPU a number of functions previously implemented in peripheral chips, producing higher reliability and faster operating speeds at less cost. It had a 6-byte prefetch queue.

It was suitable for high-volume applications such as computer workstations, word processors and personal computers. Compared with previous microprocessors, it integrated on chip a clock oscillator, an interrupt controller, two DMA channels (with all support logic), chip-select logic with two operating modes (an iRMX 86 mode and a non-iRMX 86 mode, similar to the min and max modes of the 8086), and three timers. Moreover, ten extra instructions were added to this microprocessor: PUSHA and POPA (to push and pop all the registers); IMUL destination, source, #immediate (an instruction of this type did not exist in previous processors); SHIFT/ROTATE destination, #immediate (which could shift or rotate a register's contents a given number of times); INS and OUTS (for input and output of a string, e.g. INS ES:DI and OUTS DS:SI); and three instructions required by operating systems (a concept which became, and remains, very prominent from the early days of microprocessors): ENTER, LEAVE and BOUND.

It was made up of 134,000 MOS transistors, forming a 16-bit microprocessor with a 16-bit data bus and a 20-bit address bus (so it could address only 1 MByte of memory), and could work at clock rates of 4 and 6 MHz. This processor is regarded as a second-generation microprocessor.


Microprocessor 80186.

Microprocessor - 4004

Posted by Harisinh | Posted in | Posted on 3:29 AM

0

-


Finally, in 1971, the team of Ted Hoff, S. Mazor and F. Faggin developed the Intel 4004 microprocessor, a "computer on a chip". The 4004 was the world's first commercially available microprocessor. This breakthrough invention powered the Busicom calculator and paved the way for embedding intelligence in inanimate objects, as well as for the personal computer.

Just four years later, in 1975, Fortune magazine said, "The microprocessor is one of those rare innovations that simultaneously cuts manufacturing costs and adds to the value and capabilities of the product. As a result, the microprocessor has invaded a host of existing products and created new products never before possible." This single invention revolutionized the way computers are designed and applied. It put intelligence into "dumb" machines and distributed processing capability into previously undreamed-of applications.

The advent of intelligent machines based on microprocessors changed how we gather information, how we communicate, and how and where we work. In mid-1969 Busicom, a now-defunct Japanese calculator manufacturer, asked Intel to design a set of chips for a family of high-performance programmable calculators. Marcian E. "Ted" Hoff, an engineer who had joined Intel the previous year, was assigned to the project. In its original design, the calculator required twelve chips, which Hoff considered too complex to be cost-effective. Furthermore, Intel's small MOS staff was fully occupied with the 1101 (an MOS static semiconductor memory), so the design resources were not available.

Hoff came up with a novel alternative: by reducing the complexity of the instructions and providing a supporting memory device, he could create a general-purpose information processor. The processor, he reasoned, could find a wide array of uses for which it could be modified by programs stored in memory. "Instead of making their device act like a calculator," he recalled, "I wanted to make it function as a general purpose computer programmed to be a calculator." To this end, Hoff and fellow engineers Federico Faggin and Stan Mazor came up with a design that involved four chips: a central processing unit (CPU) chip, a read-only memory (ROM) chip for the custom application programs, a random access memory (RAM) chip for processing data, and a shift register chip for the input/output (I/O) ports.

The CPU chip, though it then had no name, would eventually be called a microprocessor. Measuring one-eighth of an inch wide by one-sixth of an inch long and made up of 2,300 MOS transistors, Intel's first microprocessor was equal in computing power to the first electronic computer, ENIAC, which filled 3,000 cubic feet with 18,000 vacuum tubes. The 4004, as it came to be called, could execute 60,000 operations a second, which by today's standards is primitive. It worked at a clock rate of 108 kHz.


Microprocessor 4004.