The routing algorithm adapts to the changed topology, and routes are recalculated. Although packets in transit are most likely dropped while the routing algorithm is adapting, the source router fails to receive acknowledgments for those packets and retransmits them.

7.4 Asynchronous Transfer Mode

ATM stands for Asynchronous Transfer Mode (aren't you glad you asked?). Why is it called that? The phone companies were accustomed to protocols that had data for each call occur at regular intervals in the data stream. If you stick enough header information on the data so that it is self-describing, the data can be transmitted as it exists rather than according to a fixed amount every unit of time. Although it takes more overhead to make the data self-describing, it's more flexible. Most applications do not operate at a constant bit rate. ATM allows data to be bursty and to be sent when needed. So that's why they call it asynchronous. Basically, ATM reinvented packet switching.

ATM people call the boxes that move cells switches, although they really are not conceptually different from routers. They need a routing algorithm. They make forwarding decisions. The main difference between something that moves ATM cells and something that moves, say, IP packets is that in ATM you set up the path first, assigning a virtual circuit number. Then the switches make forwarding decisions based on the virtual circuit number rather than the destination address. ATM is a connection-oriented service rather than a datagram service.

There are other things to know about ATM, such as LANE (LAN emulation), MPOA (multiprotocol over ATM), classical IP over ATM, and the addresses that are used. These topics are discussed in Section 11.3.3.

7.4.1 Cell Size

Perhaps the weirdest thing about ATM is that packets (which it calls cells) carry only 48 bytes of data, with a 5-byte header. The number 48 made everyone unhappy. The French PTT claimed that it needed cells to be no more than 32 bytes, or else it would need echo suppressors. (Delay is caused by waiting for there to be enough data to fill a cell, followed by the propagation time across the country.) The data people wanted large cells, at least 128 bytes. They had two main arguments. One was that 5 bytes of overhead for small cells was excessive; for example, for a 48-byte payload there would be more than 10% overhead. The other argument was that switches couldn't make a decision fast enough, so with small cell sizes ATM networks would be limited in speed because of the switching speed rather than the transmission speed. If the speed of the switches really was the thing that limited the speed of the ATM network, you could get twice as much throughput by making the cells twice as large. The claim was that cells needed to be somewhere between 64 and 128 bytes long so that switching speed would not be the limiting factor.

So the committee compromised on 48, a number large enough that it requires echo suppressors and small enough to make the data people as unhappy as the French PTT.
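To make the overhead argument concrete, here is a small illustrative calculation (not part of any standard) of the fixed 5-byte header's cost for the payload sizes mentioned above. With a 48-byte payload the header is 5/48, a little over 10% of the payload, which is the figure the data people objected to.

```python
# Illustrative calculation only: per-cell header overhead for ATM's fixed
# 5-byte header, measured against the payload and against the whole cell.
HEADER_BYTES = 5

for payload in (32, 48, 64, 128):
    cell = HEADER_BYTES + payload
    vs_payload = HEADER_BYTES / payload   # the "more than 10%" figure for 48 bytes
    vs_cell = HEADER_BYTES / cell
    print(f"payload {payload:3d} B: header is {vs_payload:.1%} of payload, "
          f"{vs_cell:.1%} of the {cell}-byte cell")
```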
7.4.2 Virtual Circuits and Virtual Paths

ATM is conceptually very similar to X.25. Virtual circuits are created as in Figure 7.1, creating the call-mapping database in the switches that specifies the port onto which to forward a cell and what the outgoing connection identifier should be.

The connection identifier in the ATM cell header has two complexities.

- It's hierarchical and divided into two subfields: VPI (virtual path identifier) and VCI (virtual circuit identifier). The VCI is 16 bits. The VPI is 12 bits.
- It looks different between an endnode and a switch than between two switches. Between the endnode and the switch there are 4 bits reserved for a field called generic flow control, which the committee thought might be useful someday. The generic flow control uses 4 bits of the VPI field, so to endnodes the VPI field is 8 bits long. Except for making the spec a little more complicated, reserving those 4 bits does no harm because the endnode doesn't need any more bits than the VCI field.

So what's a VPI? There might be a very high speed backbone carrying many millions of calls. The split between VPI and VCI saves the routers in the backbone from having to keep track of millions of individual calls in their call-mapping databases. Instead, the backbone routers use only the VPI portion of the call identifier. Thousands of VCs might be going over the same VP, but the switches inside can treat all the VCs for that VP as a unit. Outside the backbone, the switches treat the entire field (VPI and VCI) as one combined, nonhierarchical field. The term VP-switching refers to switches that look at only the VPI portion; VC-switching refers to switches that look at the entire field.

A way of comparing VPs and VCs is to think of each connection across the cloud as a logical port (see Figure 7.12). To forward onto the logical port, a physical port and the correct VPI must be selected.

Figure 7.12. VPs as logical ports

In the "normal" VC-switching described in Section 7.1, if S1 were to receive a call setup on port b with CI 17 for destination D, S1 would decide which port it should use to forward to D and would pick a CI unused on the chosen outgoing port. In the case in Figure 7.12, it would be "port" e. The only complication is that forwarding onto "port e" involves selection of a physical port within the cloud, and a VPI. So there are three cases.

- Switches within the cloud do normal VP-switching, with the CI being the 12-bit VPI.
- Switches outside the cloud do normal VC-switching, with the CI being the 28-bit concatenated VPI/VCI field.
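To illustrate the difference between the two kinds of switching, here is a minimal sketch. It is not a real ATM switch or any particular API: port b and CI 17 come from the S1 example above, while every other port name and identifier value is invented, and the edge switch's combined 28-bit CI is modeled as a (VPI, VCI) pair for readability.

```python
# Illustrative sketch only -- not a real ATM switch implementation.
# A VC switch keys its call-mapping table on (incoming port, VPI, VCI) and
# rewrites both identifiers; a VP switch keys only on (incoming port, VPI),
# rewrites the VPI, and carries the VCI through untouched.

vc_table = {
    # (in_port, vpi, vci): (out_port, new_vpi, new_vci)
    ("b", 0, 17): ("e", 5, 92),   # port b / CI 17 from the text; the rest invented
}

vp_table = {
    # (in_port, vpi): (out_port, new_vpi)
    ("a", 5): ("c", 9),           # all values here are invented
}

def vc_switch(in_port, vpi, vci):
    out_port, new_vpi, new_vci = vc_table[(in_port, vpi, vci)]
    return out_port, new_vpi, new_vci

def vp_switch(in_port, vpi, vci):
    out_port, new_vpi = vp_table[(in_port, vpi)]
    return out_port, new_vpi, vci  # the VCI is passed through unchanged

print(vc_switch("b", 0, 17))  # edge switch: rewrites the whole VPI/VCI
print(vp_switch("a", 5, 92))  # backbone switch: rewrites only the VPI
```

The point of the sketch is the size of the tables: the backbone switch needs one entry per VP no matter how many thousands of VCs are multiplexed onto it, while the edge switch needs one entry per individual call.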