
ALOHAnet, also known as ALOHA, was a pioneering computer networking system developed at the University of Hawaii. It was first deployed in 1970, and while the network itself is no longer used, one of the core concepts in the network is the basis for the widely used Ethernet.



One of the early computer networking designs, the ALOHA network was created at the University of Hawaii in 1970 under the leadership of Norman Abramson and others (including N. Gaarder and N. Weldon). The idea was to use low-cost amateur-radio-like systems to create a computer network linking the far-flung campuses of the university. The original version of ALOHA used two distinct frequencies in a hub/star configuration: the hub machine broadcast packets to everyone on the "outbound" channel, and the client machines sent data to the hub on the "inbound" channel. Data received by the hub was immediately re-broadcast, allowing clients to determine whether their data had been received properly. Any machine noticing corrupted data would wait a short time and then re-send the packet. This mechanism also served to detect and correct "collisions", which occurred when two client machines attempted to send a packet at the same time.

ALOHA was important because it used a shared medium for transmission. This revealed the need for medium access control schemes such as CSMA/CD, later used by Ethernet. Unlike the ARPANET, where each node could only talk to the node at the other end of its wire, in ALOHA all nodes communicated on the same frequency. This meant that some sort of system was needed to control who could talk at what time. ALOHA's situation was similar to the issues later faced by (non-switched) Ethernet and Wi-Fi networks.

This shared-medium scheme generated interest among other researchers. ALOHA's scheme was very simple: because data was sent via teletype, the data rate usually did not exceed 80 characters per second. When two stations tried to talk at the same time, both transmissions were garbled and the data had to be re-sent manually. ALOHA proved that a useful network was possible even without solving this problem, and it sparked interest among others, most significantly Bob Metcalfe and other researchers at Xerox PARC, whose team went on to create the Ethernet protocol.

The ALOHA protocol

The ALOHA protocol is an OSI layer-2 protocol for networks with a broadcast topology.

The difference between ALOHA and Ethernet on a shared medium is that Ethernet uses CSMA/CD: it broadcasts a jamming signal to notify all computers connected to the channel that a collision has occurred, forcing them to abort their current packet or frame. The jamming signal enables early release of the transmission medium where transmission delays dominate propagation delays, which is the case for many Ethernet variants. Because ALOHA was a wireless system, it faced additional problems, such as the hidden node problem, which meant that protocols that work well on a small-scale wired LAN would not always work. Even though the Hawaiian island network spanned about 400 km in diameter, propagation delays were almost certainly small in comparison with transmission delays, so the protocol used had to be one robust enough to cope with collisions rather than one relying on fast collision detection.


File:Pure ALOHA1.svg
Pure ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which have collided.

The first version of the protocol (now called "Pure ALOHA") was quite simple:

  • If you have data to send, send the data
  • If the message collides with another transmission, try resending "later"

Note that the first step implies that Pure ALOHA does not check whether the channel is busy before transmitting. The critical aspect is the "later" concept: the quality of the backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and the predictability of its behavior.
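The two steps above can be sketched as a send-and-retry loop. This is a minimal illustration, not ALOHAnet's actual code; `channel_send` and `got_ack` are hypothetical callbacks standing in for the radio hardware and the hub's echoed acknowledgement, and the widening backoff window is just one possible choice of "later":

```python
import random

def pure_aloha_send(channel_send, got_ack, max_attempts=10, slot=0.1):
    """Pure ALOHA sender: transmit immediately, and if no acknowledgement
    arrives (a collision is assumed), wait a random 'later' and retry."""
    for attempt in range(max_attempts):
        channel_send()              # step 1: just send; no carrier sensing
        if got_ack():               # hub echoed the frame back intact
            return attempt + 1      # success: number of attempts used
        # step 2: back off for a random time before resending; widening
        # the window after each failure is one possible backoff scheme
        delay = random.uniform(0, slot * (attempt + 1))
        # a real station would now sleep for `delay` seconds
    return None                     # gave up after max_attempts collisions
```

With stub callbacks that acknowledge on the third try, the function returns 3; with callbacks that never acknowledge, it returns None after exhausting its attempts.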

To assess Pure ALOHA, we need to predict its throughput, the rate of (successful) transmission of frames. (This discussion of Pure ALOHA's performance follows Tanenbaum [1].) First, let's make a few simplifying assumptions:

  • All frames have the same length.
  • Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a station keeps trying to send a frame, it cannot also be generating more frames to send.)
  • The population of stations attempts to transmit (both new frames and old frames that collided) according to a Poisson distribution.

Let "T" refer to the time needed to transmit one frame on the channel, and let's define "frame-time" as a unit of time equal to T. Let "G" refer to the mean used in the Poisson distribution over transmission-attempt amounts: that is, on average, there are G transmission-attempts per frame-time.

File:Pure ALOHA.svg
Overlapping frames in the pure ALOHA protocol. Frame-time is equal to 1 for all frames.

Consider what needs to happen for a frame to be transmitted successfully. Let "t" refer to the time at which we want to send a frame. We want to use the channel for one frame-time beginning at t, and so we need all other stations to refrain from transmitting during this time. Moreover, we need the other stations to refrain from transmitting between t-T and t as well, because a frame sent during this interval would overlap with our frame.

For any frame-time, the probability of there being k transmission-attempts during that frame-time is:

<math>\frac{G^k e^{-G}}{k!}</math>

The average number of transmission-attempts over 2 consecutive frame-times is 2G. Hence, for any pair of consecutive frame-times, the probability of there being k transmission-attempts during those two frame-times is:

<math>\frac{(2G)^k e^{-2G}}{k!}</math>

Therefore, the probability (<math>Prob_{pure}</math>) of there being zero transmission-attempts between t-T and t+T (and thus of a successful transmission for us) is:

<math>Prob_{pure} = e^{-2G}</math>
The throughput can be calculated as the rate of transmission-attempts multiplied by the probability of success, and so we can conclude that the throughput (<math>S_{pure}</math>) is:

<math>S_{pure} = G e^{-2G}</math>
The maximum throughput is 0.5/e frames per frame-time (reached when G = 0.5), which is approximately 0.184 frames per frame-time. This means that, in Pure ALOHA, only about 18.4% of the time is used for successful transmissions.
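As a quick numerical check of these figures, the throughput expression can be evaluated directly. This is a small verification script, not part of the original analysis:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): the attempt rate times the probability that the
    2-frame-time vulnerable period sees no other transmission-attempts."""
    return G * math.exp(-2 * G)

# Scan G over a fine grid to locate the maximum numerically.
best_G = max((g / 1000 for g in range(1, 2001)), key=pure_aloha_throughput)
peak = pure_aloha_throughput(best_G)  # expected: 1/(2e) ≈ 0.184 at G = 0.5
```

The numerical maximum lands at G = 0.5 with throughput 1/(2e), matching the 18.4% figure above.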

Slotted ALOHA

File:Slotted ALOHA.svg
Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which are in the same slots.

An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced discrete timeslots and increased the maximum throughput. A station can begin a transmission only at the start of a timeslot, so collisions are reduced. In this case, we only need to worry about the transmission-attempts within one frame-time, not two consecutive frame-times, since frames can only collide if they are sent within the same timeslot. Thus, the probability of there being zero transmission-attempts in a single timeslot is:

<math>Prob_{slotted} = e^{-G}</math>

The throughput is:

<math>S_{slotted} = G e^{-G}</math>
The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is approximately 0.368 frames per frame-time, or 36.8%.
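The slotted result can also be checked by simulation: draw a Poisson-distributed number of attempts for each slot and count the slots in which exactly one station transmits. This is an illustrative check under the same Poisson-traffic assumption, not a model of the real network:

```python
import math
import random

def simulate_slotted(G, slots=200000, seed=1):
    """Estimate slotted ALOHA throughput: a slot carries a successful
    frame iff exactly one transmission-attempt falls in it."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        # Sample k ~ Poisson(G) via Knuth's multiplication method
        L, k, p = math.exp(-G), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        if k == 1:
            successes += 1
    return successes / slots

# At G = 1 the estimate should be close to 1/e ≈ 0.368
```

Running this at G = 1 gives an estimate within a percentage point of the analytical 36.8% maximum.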

Other Protocols

Aloha's characteristics are still not much different from those experienced today by Wi-Fi and similar contention-based systems that have no carrier-sense capability. There is a certain amount of inherent inefficiency in these systems. For instance, 802.11b sees about 2-4 Mbit/s of real throughput with a few stations talking, versus its theoretical maximum of 11 Mbit/s. It is typical to see these networks' throughput degrade significantly as the number of users and message burstiness increase. For these reasons, applications that need highly deterministic load behavior often use token-passing schemes (such as token ring) instead of contention systems; for instance, ARCNET was very popular in embedded applications. Nonetheless, contention-based systems also have significant advantages, including ease of management and speed of initial communication.

Because listen-before-send (CSMA - Carrier Sense Multiple Access), as used in Ethernet, works much better than Aloha wherever all stations can hear each other, Slotted Aloha survives mainly where carrier sensing is impractical: low-bandwidth tactical satellite communications networks used by the US military, subscriber-based satellite communications networks, and contactless RFID technologies.


Norman Abramson was a professor of engineering at Stanford, but was also an avid surfer. After visiting Hawaii in 1969, he inquired at the University of Hawaii if they were interested in hiring a professor of engineering. He joined the staff in 1970 and started working on a radio-based data communications system to connect the Hawaiian islands together, with funding from Larry Roberts.

By late 1970 the system was already in use: the world's first wireless packet-switched network. Abramson then managed to get an IMP from Roberts and connected ALOHAnet to the ARPANET on the mainland in 1972. It was the first time another network was connected to the ARPANET, although others would soon follow.

Variants of the Aloha protocol (such as Slotted Aloha) later appeared as the radio interface protocol in popular wireless data networks such as ARDIS, Mobitex, CDPD and GSM.


Prior to ALOHAnet, most computer communications tended to share similar features. The data to be sent was turned into an analog signal using a modem-like device, and that signal was sent over a known connection such as a telephone line. The connection was point-to-point, and typically set up by manual control.

In contrast ALOHAnet was a true network. All of the computers "connected" to ALOHAnet could send data at any time without operator intervention, and any number of computers could be involved. Since the medium was a radio, there were no fixed costs so the channel was "left open" and could be used at any time.

Using a shared signal in this way leads to an important problem: If two systems on the network – known as nodes – send at the same time, both signals will be ruined. Some sort of mechanism needs to be in place to avoid this problem. There are a number of ways to do this.

One would be to use a different radio frequency for every node, a system known as frequency multiplexing. However, every other machine would then have to be able to "tune in" to each node added to the network. Soon there would be hundreds of such frequencies, and radios capable of listening to that many frequencies at the same time are very expensive.

Another solution is to give each node a "time slot" in which it is allowed to send, known as time division multiplexing. This is easier to implement because the nodes can continue to share a single radio frequency. On the downside, if a particular node has nothing to send, its slot is wasted. This leads to situations where the available time is largely empty and the one node with data to send has to do so very slowly, just in case one of the other 100 nodes decides to send something.

ALOHAnet instead used a new solution to the problem, one that has since been developed into the standard approach, carrier sense multiple access. In the Aloha system, nodes that needed to send simply sent out their frames as soon as they were ready.

Normally this would mean that the first node to start using the radio would have it for as long as it wanted, which means the other nodes couldn't "get a word in edgewise". In order to avoid this problem the ALOHAnet made the nodes break down their messages into small packets, and send them one at a time with gaps between them. This allowed other nodes to send out their packets in between, so everyone could share the medium at the same time.

There is one last problem to consider: if two nodes attempt to start their broadcast at the same time, you'll have the same sorts of problems you would with any other system. In this case ALOHAnet invented a very clever solution. After sending any packet the nodes listened to see if their own message was sent back to them by a central hub. If they got their message back, they could move on to their next packet.

If instead they never got their packet back, that meant something had prevented it from arriving at the hub, such as a collision with another node's packet. In this case they simply waited a random time and tried again. Since each node picked its waiting time at random, the retransmissions were spread out and would usually no longer overlap. Under most circumstances this avoided repeat collisions.
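The effect of the random wait can be illustrated with a toy model (hypothetical, not ALOHAnet's actual timing): if two colliding nodes each pick a retry moment uniformly from k possible slots, they collide again only when they pick the same one, with probability 1/k:

```python
import random

def recollision_probability(k, trials=100000, seed=0):
    """Estimate how often two nodes that independently pick a random
    retry slot (out of k) collide again; analytically this is 1/k."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(k) == rng.randrange(k) for _ in range(trials))
    return hits / trials
```

With k = 1 (no real randomness) the nodes always collide again; with 8 candidate slots the estimate comes out near 1/8, showing why a wider random window breaks the tie.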

This sort of collision avoidance system has the advantage of allowing any one node to use the entire network's capability if no one else is using it. It also requires no "setup"; anyone can be hooked up and start talking without any additional information like the frequency or time slot to use.

On the downside, if the network gets busy the number of collisions can rise dramatically to the point where every packet will collide. For ALOHAnet the maximum channel utilisation was around 18%, and any attempts to drive the network over this would simply increase collisions, and the overall data throughput would actually decrease, a phenomenon known as congestion collapse.

With Slotted Aloha, a centralised clock sent out small clock tick packets to the outlying stations. Outlying stations were only allowed to send their packets immediately after receiving a clock tick. If there is only one station with a packet to send, this guarantees that there will never be a collision for that packet. On the other hand, if there are two stations with packets to send in the same slot, this algorithm guarantees a collision, and the whole of the slot period up to the next clock tick is wasted. With some mathematics, it is possible to show that this protocol improves overall channel utilisation: halving the window in which a collision can occur doubles the maximum throughput.

The relatively low utilisation turns out to be a small price to pay given the advantages. A slight modification of this system for wired networks by Bob Metcalfe improved collision avoidance on busy networks, and this became the standard for Ethernet. Today the technique is known as CSMA/CD: carrier sense multiple access with collision detection.

Collision detection mechanisms are much more difficult to implement in wireless systems when compared to wired/cabled systems, and Aloha did not even attempt to check for collisions. In a wired system, it is possible to abandon the transmission of colliding packets by first detecting the collision, and then notifying the senders. This is not generally a feasible option in wireless systems, and was not attempted in Aloha.

The ALOHAnet itself ran on 9600 baud radio modems across Hawaii. The system used two 100 kHz "channels" (slices of frequency): the broadcast channel at 413.475 MHz and the random access channel at 407.350 MHz. The network was a star, with a single central computer (an HP 2100) at the university receiving all messages on the random access channel and then re-broadcasting them to all of the nodes on the broadcast frequency. This setup reduced the number of possible collisions: there could be no collisions at all on the broadcast frequency, for what that was worth. Later upgrades added repeaters that also acted as hubs, greatly increasing the area and total capacity of the network.

Send and receive packets were identical. The packet had a 32-bit header with a 16-bit parity check, followed by up to 80 bytes of data and another 16-bit parity check.
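That layout can be made concrete with a small packing routine. The exact parity scheme ALOHAnet used is not given here, so `checksum16` below is a stand-in (a simple XOR over 16-bit words), and the field layout follows only the description above:

```python
import struct

def checksum16(data: bytes) -> int:
    """Placeholder 16-bit check: XOR of big-endian 16-bit words.
    The real ALOHAnet parity scheme is not specified in this text."""
    if len(data) % 2:
        data += b"\x00"            # pad odd-length payloads
    c = 0
    for (word,) in struct.iter_unpack(">H", data):
        c ^= word
    return c

def aloha_frame(header: int, payload: bytes) -> bytes:
    """Pack a frame per the description: 32-bit header, 16-bit check on
    the header, up to 80 bytes of data, then a 16-bit check on the data."""
    if len(payload) > 80:
        raise ValueError("ALOHAnet frames carry at most 80 bytes of data")
    hdr = struct.pack(">I", header)
    return (hdr + struct.pack(">H", checksum16(hdr))
            + payload + struct.pack(">H", checksum16(payload)))
```

A maximum-length frame thus comes to 88 bytes: 4 of header, 2 of header check, 80 of data, and 2 of data check.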

Historical details of the original wireless network are now rather difficult to come by. It is not clear whether Aloha used internet packets which were coming into use at the time, and simply framed them for the wireless hops, or whether it used its own frame format and routing to the internet was done by protocol conversion at the Oahu station. Also it is reasonably well documented that acknowledgements for incoming packets were by retransmitting the packets back to the outlying stations, but since the system was not fully symmetrical, were packets originating from Oahu also echoed back (unless they were acknowledgement packets)? Lastly, Metcalfe has commented that although the original Aloha and the later slotted Aloha had limited channel efficiency, it was possible to improve on this considerably once the system was deployed. There are few details of this, but they may have moved to something a bit closer to a reservation protocol.


  1. Tanenbaum, Andrew S. (2003). Computer Networks. Prentice Hall PTR.
  • Kuo, Franklin F. (1995). "The ALOHA system". ACM Computer Communication Review 25.
  • Abramson, Norman (December 2009). "The AlohaNet -- Surfing for Wireless Data". IEEE Communications Magazine 47 (12): 21-25.

