Unreal-to-Real

Sunday, May 26, 2013

Fiber Network - System Power Budgeting

Attenuation of both multimode and single-mode fibre is generally linear with distance. The amount of signal loss due to cable attenuation is just the attenuation per kilometer (at the signal wavelength) multiplied by the distance. To determine the maximum distance you can send a signal (leaving out the effects of dispersion), all you need to do is to add up all the sources of attenuation along the way and then compare it with the “link budget”. The link budget is the difference between the transmitter power and the sensitivity of the receiver.

Thus, if you have a transmitter of power -10 dBm and a receiver that requires a signal of power -20 dBm (minimum) then you have 10 dB of link budget. So you might allow:

· 10 connectors at 0.3 dB per connector = 3 dB
· 2 km of cable at 2 dB per km (MM GI fibre at 1300 nm) = 4 dB
· Contingency of (say) 2 dB for deterioration due to ageing over the life of the system = 2 dB

This leaves us with a total of 9 dB system loss. This is within our link budget and so we would expect such a system to have sufficient power. Dispersion is a different matter and may (or may not) provide a more restrictive limitation than the link budget.
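The budget arithmetic above can be sketched in a few lines. This is a minimal sketch in Python using exactly the example figures from the text; the variable names are mine, not part of any standard tool:

```python
# All powers in dBm, all losses in dB - the example values from the text.
tx_power_dbm = -10.0          # transmitter output power
rx_sensitivity_dbm = -20.0    # minimum power required at the receiver

link_budget_db = tx_power_dbm - rx_sensitivity_dbm   # 10 dB of link budget

losses_db = {
    "connectors": 10 * 0.3,   # 10 connectors at 0.3 dB each
    "fibre": 2 * 2.0,         # 2 km of MM GI fibre at 2 dB/km (1300 nm)
    "ageing margin": 2.0,     # contingency for deterioration over system life
}
total_loss_db = sum(losses_db.values())   # 9 dB of system loss

print(f"link budget = {link_budget_db} dB, total loss = {total_loss_db} dB")
print("power budget OK" if total_loss_db <= link_budget_db else "insufficient power")
```

As in the text, 9 dB of loss fits inside the 10 dB budget, so the system has sufficient power (dispersion permitting).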

The amount of power that we have to use up on the link and in connectors is determined by the characteristics of the components we select as transmitters and receivers.





The figure referred to here (not reproduced) showed the characteristics of some typical devices versus the transmission speed (in bits per second). A number of points are interesting here:

1. The power output of a laser doesn't vary much with modulation speed. Every laser has a limit to the maximum speed at which it can be modulated but up to that limit power output is relatively constant.

2. LEDs on the other hand produce less and less output as the modulation rate is increased. In the figure, the difference in fibre types only relates to the amount of power you can couple from an LED into the different types of fibre.

3. All receivers require higher power as the speed is increased. This is more a rule of physics than anything else. To reliably detect a bit, a receiver needs a certain number of photons. The exact number depends on the receiver itself, but there is a theoretical minimum of 21 photons per bit. Real receivers require around ten times this, but it is a relatively fixed amount of optical power needed per bit. Therefore every time we double the modulation speed we also need to double the received power to keep a constant signal-to-noise ratio.

4. In addition to the point above there is another important problem when we get to seriously high speeds (above 10 Gbps). Within a pin detector at speeds above 10 Gbps the time taken for electrons to diffuse/drift across the i-layer (in the p-i-n structure) becomes a significant limitation. So if you want the device to respond faster you have to reduce the thickness of the i-layer. But reducing the thickness of this layer increases the capacitance between the p and n layers, so you have to reduce the detector surface area to compensate. Both of these actions reduce the volume (size) of the i-layer and hence they reduce the probability that an incident photon will be absorbed and create an electron/hole pair. Thus the quantum efficiency of the detector is significantly reduced. Up to 10 Gbps we expect a (best case) quantum efficiency in pin detectors of around 0.8. At 20 Gbps this is reduced to 0.65, at 40 Gbps it reduces again to 0.33 and at 60 Gbps it becomes 0.25 or so. This reduction in quantum efficiency operates over and above the doubling of power you need when you double the line speed (as discussed in the previous point).

Current research is under way on the use of travelling wave principles in detectors to increase the quantum efficiency at these extreme speeds.
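The fixed photons-per-bit rule in point 3 can be sketched numerically. This is an illustrative Python calculation only: it assumes 210 photons per bit (ten times the 21-photon theoretical minimum mentioned above) and a 1550 nm wavelength, and it ignores the quantum-efficiency roll-off of point 4.

```python
import math

def required_power_dbm(bit_rate_bps, photons_per_bit=210, wavelength_nm=1550):
    """Minimum optical power for a fixed photons-per-bit requirement.

    photons_per_bit=210 is an assumed figure: ~10x the 21-photon
    theoretical limit quoted in the text.
    """
    h = 6.626e-34      # Planck constant, J*s
    c = 3.0e8          # speed of light, m/s
    photon_energy_j = h * c / (wavelength_nm * 1e-9)
    power_w = photons_per_bit * photon_energy_j * bit_rate_bps
    return 10 * math.log10(power_w / 1e-3)   # convert watts to dBm

# Doubling the bit rate doubles the photon arrival rate needed,
# which is a 10*log10(2) = ~3 dB increase in required receiver power.
p_10g = required_power_dbm(10e9)
p_20g = required_power_dbm(20e9)
print(f"10 Gbps: {p_10g:.1f} dBm, 20 Gbps: {p_20g:.1f} dBm")
```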

If you look in the figure for a given bit rate (vertical line) there will be a difference between the required receiver power and the available transmitter power. This difference is the amount we have available for losses in the fibre and connectors (and other optical devices such as splitters and circulators). It is also very important to allow some margin in the design for ageing of components (lasers produce less power as they age, detectors become less sensitive etc...).

Connector and Splice Loss Budgeting
The signal loss experienced at a connector or splice is not a fixed or predictable amount! We know roughly how much loss to expect from a particular connector type or from a particular type of splice in a fibre. The problem is the measured losses in actual splices and actual connectors vary considerably from each other. The good news is that actual measurements form (roughly) a “normal” statistical distribution about the mean (average).

A previous section showed "typical" losses that may be expected from different connector types. That table was compiled from specifications obtained from connector manufacturers. However, in the practical world things are a bit more complex than this:

1. For a connection using almost any modern single-mode connector where both connectors (halves of the connection) are from the same supplier, you can expect a mean loss of 0.2 dB with a standard deviation of 0.15 dB.
2. If the manufacturers of the two connectors (halves of the connection) are different (the any-to-any case) then you can expect a mean loss of 0.35 dB with a standard deviation of 0.25 dB. One type of single-mode connector may have an "average loss" of 0.2 dB but in practical situations this loss might vary from perhaps 0.1 dB to 0.8 dB (for the any-to-any case). In budgeting power for a link including multiple connectors we have a real problem deciding how much loss to allow for them.

If a hypothetical link has 10 connectors there is a statistical probability (albeit minuscule) that all will be high in loss (in this example 0.8 dB each) and so perhaps, to be safe, we need to allocate 8 dB for connector losses. But there is also a probability that each will be 0.1 dB and therefore we might need to allocate only 1 dB for the loss budget. In fact the probability of each of the above events is smaller than "minuscule" - it is somewhere between about 1 in 10¹⁰ and 1 in 10⁶, depending on the exact way in which the extreme best and worst case (0.1 dB and 0.8 dB) figures were arrived at in the first place. The same principle applies to fibre splices.

It is possible to get very sophisticated with statistics in predicting the amount of loss, but things can be simplified significantly: if you know the average loss for a single connector and the standard deviation (σ) of the connector loss for a particular situation, then you can calculate these figures for any given combination.

1. The average (mean) of the total is just the average loss of a single connector multiplied by the number of connectors. Thus if we have 5 connectors in a link with an average loss of 0.35 dB per connector, then the average loss of the total link will be 5 × 0.35, or 1.75 dB.

2. It is very important that the term “average loss” in this context be understood. If we fit 5 connectors (pairs) into a single link the total doesn't have an average loss - it has an actual loss. This actual loss will be quite a bit different from the average quoted above. If (hypothetically) we were to make a large number of links (say 100) each with five connectors then we could compute the mean (average) loss of a 5-connector link just by averaging over the 100 links. This mean would be very close indeed to 5 times the mean loss of a single connector. But we need to take care statistically of the fact that any real 5-connector link will be different from the mean. This is done by quoting not only the mean (for the combination of 5 connectors) but also a standard deviation from the mean.

3. The standard deviation (σ) of the total is just the standard deviation of a single connector multiplied by the square root of the number of connectors involved. If we have 5 connectors each with a σ of 0.25, then the σ of the total is √5 times 0.25. That is, 2.236 × 0.25, which equals 0.559. For the above example (5 connectors) we then have a mean of 1.75 dB and a σ of 0.559 dB. Using a knowledge of the basic characteristics of a statistical "normal distribution" we can now calculate the amount of loss to allow for the combination, based on the probability we are prepared to accept of being correct (or wrong!).

· We know that 84.13% of the time the total will fall below one standard deviation (σ) above the mean. So if we allow a loss for the 5 connectors of the mean plus one standard deviation (1.75 + 0.559 = 2.31 dB) then we will be safe 84.13% of the time. That is to say, the real value will be less than our allowance 84.13% of the time.
· The total will fall below two standard deviations above the mean 97.72% of the time. So if we allow 2.868 dB for the connectors we will be safe 97.72% of the time.
· If we allow the mean plus 2.32 times the standard deviation then we will be safe 99% of the time.
· In practice, many people like to use the "3-σ" value, where we can be confident of being safe 99.87% of the time. For this example the 3-σ value would be 1.75 + (3 × 0.559), which equals 3.427 dB.

We have taken a few shortcuts here. For example we have assumed that the distribution of connector losses is a statistically “normal” distribution. Also we have assumed that all connections are between connectors made by different manufacturers (the any-to-any case). Statistically some of them will really be like-to-like. But while we have taken shortcuts, the result is close enough.
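The mean-plus-k-sigma calculation above can be sketched in a few lines of Python. The function name is mine, not from the text; the defaults are the any-to-any connector figures quoted earlier:

```python
import math

def connector_loss_allowance(n, mean_db=0.35, sigma_db=0.25, k=3):
    """Loss allowance for n cascaded connectors, at mean + k standard deviations.

    Assumes connector losses are independent and (roughly) normally
    distributed, as the text does. Defaults are the any-to-any figures:
    mean 0.35 dB, sigma 0.25 dB per connector.
    """
    total_mean = n * mean_db                 # means simply add
    total_sigma = sigma_db * math.sqrt(n)    # sigmas add in quadrature
    return total_mean, total_sigma, total_mean + k * total_sigma

mean, sigma, allowance = connector_loss_allowance(5, k=3)
print(f"mean = {mean:.2f} dB, sigma = {sigma:.3f} dB, 3-sigma allowance = {allowance:.3f} dB")
```

For the worked example of 5 connectors this reproduces the figures above: a mean of 1.75 dB, a σ of about 0.559 dB, and a 3-σ allowance of about 3.43 dB.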

1. If the number of cascaded splices is large (say more than 30) you can safely use the average loss and multiply it by the number of splices involved and ignore the variations. There is a statistical law here sometimes referred to as the “law of large numbers”. When you add up a large number of variable “things” (with the same characteristics) the variation in the sum gets smaller and smaller (in relation to the total) as the number gets larger.
2. With a very small number of splices (say two) you can allocate the worst case for each of them. The formula will arrive very close to this anyway.
3. For numbers in between, use the calculation method described above. In long distance links it is common to regard splices as part of the fibre loss. So you might get raw SM fibre with a loss (at 1550 nm) of 0.21 dB/km. After cabling this will increase to perhaps 0.23 dB/km. For loss budget purposes you might allocate 0.26 dB/km for installed cable. Cable is typically supplied in 2 km lengths, so in a 100 km link there will be a minimum of 50 splices. Similarly, in the 1310 nm band, a typical cable attenuation might be 0.36 dB/km but it is typical to allocate 0.4 dB/km for fibre losses in new fibre used in this wavelength band. The same piece of installed fibre cable would then be budgeted at 0.4 dB/km when used in the 1310 nm band and at 0.26 dB/km when used in the 1550 nm band.
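The per-km budgeting convention described above can be sketched as a short calculation. This is a minimal Python sketch: splice losses are folded into a single installed-cable dB/km allocation per wavelength band, using the allocations quoted in the text:

```python
# Installed-cable loss allocations (dB/km) from the text; splice losses
# are included in these per-km figures rather than counted separately.
budget_db_per_km = {1310: 0.40, 1550: 0.26}

link_km = 100   # example link length; ~50 splices at 2 km cable lengths
budgets = {band: round(link_km * per_km, 1)
           for band, per_km in budget_db_per_km.items()}

for band, total in budgets.items():
    print(f"{band} nm band: allocate {total} dB of fibre loss over {link_km} km")
```

For the 100 km example this gives a 40 dB fibre-loss allocation in the 1310 nm band and 26 dB in the 1550 nm band.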

Power Penalties

There are a number of phenomena that occur within an optical transmission system that can be compensated for by increasing the power budget. In each case the amount of additional power required to overcome the problem is termed the “power penalty”.
In all commodity communications products and in most pre-planned systems the effects of power penalties are already included by means of adjustment of the receiver sensitivity. The user systems engineer can usually ignore them quite safely. Nevertheless it is important to understand what they are and get some idea of the magnitude of the penalty. The three most important issues here for digital systems are:

1. System noise
2. Effect of dispersion and
3. Extinction ratio

Signal-to-Noise Ratio (SNR)
The quality of any received signal in any communication system is largely determined by the ratio of the signal power to the noise power - the SNR. Obviously, the SNR is a function of both the amount of noise and the signal power. You can always improve the SNR by increasing the signal power (if you can do it without also increasing the noise).

When noise is present the amount of increase in signal power necessary to compensate for the noise and produce the same SNR at the output can be expressed as an amount of power increase in decibels. This is the power penalty due to noise. In simple systems most of the noise comes from within the receiver itself and so is usually compensated for by an adjustment of the receiver sensitivity specification. In complex systems with EDFAs, ASE noise becomes important and to compensate we indulge in power level planning throughout the system.

Inter-Symbol Interference (ISI)
Dispersion causes bits (really line states or bauds) to merge into one another on the link. When this becomes severe it will prevent successful link operation but at lower levels of severity, dispersion adds noise to the signal.
We can compensate for this by increasing the signal power level and thus for certain levels of dispersion we can nominate a system power budget (allowance) to compensate.

Extinction Ratio
If a zero bit is represented by a finite power level rather than a true complete absence of power then the difference between the power level of a 1-bit and that of a 0-bit is narrowed. The power level of the 0-bit becomes the noise floor of every 1-bit. The receiver decision point has to be higher and therefore there is an increased probability of error.

This can be compensated for by an increase in available power level at the receiver. An extinction ratio of 10 dB incurs a power penalty (in either a pin-diode receiver or an APD) of about 1 dB over what it would have been with a truly zero value for a 0-bit. An extinction ratio of 3 dB causes a power penalty of 5 dB in a pin-diode receiver and 7 dB in an APD.
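Since the extinction ratio is simply the ratio of 1-bit power to 0-bit power expressed in dB, it can be computed directly. The helper below is hypothetical (not from the text):

```python
import math

def extinction_ratio_db(p_one_mw, p_zero_mw):
    """Extinction ratio: 1-bit power over 0-bit power, in dB.

    Hypothetical helper for illustration; powers are in mW (any
    common unit works, since only the ratio matters).
    """
    return 10 * math.log10(p_one_mw / p_zero_mw)

# e.g. 1 mW for a 1-bit against 0.1 mW for a 0-bit gives a 10 dB
# extinction ratio - which, per the text, costs roughly a 1 dB power penalty.
print(extinction_ratio_db(1.0, 0.1))
```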

Bit Error Rates (BER)

In a digital communication system the measure of system "goodness" is the bit error rate or BER. This is the number of errored bits received as a proportion of the total number of bits transmitted. It is usually expressed as a single number such as 10⁻⁶, which means one error in a million bits. It must be realised that errors are normal events in communications systems - there is always a probability of an error (however small).

When an optical communications system is planned the BER is a key objective of the system design and measure of success. It is determined by the link speed, its power, the distance, the amount of noise etc.
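To get a feel for what these figures mean in practice, the mean time between bit errors is just the reciprocal of (bit rate × BER). The helper below is a hypothetical illustration, not from the text:

```python
def mean_time_between_errors_s(bit_rate_bps, ber):
    """Average seconds between bit errors at a given link speed and BER."""
    return 1.0 / (bit_rate_bps * ber)

# At 10 Gbps: a BER of 1e-9 means an error roughly every 0.1 s,
# while a BER of 1e-12 means an error only about every 100 s -
# one reason faster links demand lower error rates.
print(mean_time_between_errors_s(10e9, 1e-9))
print(mean_time_between_errors_s(10e9, 1e-12))
```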

The question of what is an "adequate" BER in a particular situation, or what is a good one, is purely a judgement call on the part of the people who set the system objectives. However, when considering BERs some points should be borne in mind:

· When modern networking systems (such as ATM and Sonet/SDH) were designed it was assumed that they would operate over very low error rate optical links. Errors have a disruptive effect on both of these protocols.
· In the early days of computer networking, error rates of 10⁻⁶ and 10⁻⁵ on slow speed copper connections were normal, and higher level systems were designed to recover and give acceptable throughput. Many modern networking systems will fail entirely if operated over links this bad.

· In current networking technologies the effects of an error at the lowest layer multiply as you proceed up the protocol stack. A single bit error at the physical layer could (in the extreme) cause loss of frame synchronisation in the SDH layer, which might cause the loss of perhaps 30 frames. The loss of 30 SDH frames might mean the loss of 100 ATM cells, and the loss of these might cause the re-transmission of up to 50 cells for every one lost. So the network could well end up re-transmitting 3000 cells to recover from a single bit error! (This is an extreme and highly unlikely example but the principle is sound.)

· On many public optical networks today error rates of 10⁻¹⁴ are consistently achieved, so the user expectation is that errors will be very rare events indeed.
· Public network operators seem to consider the minimum acceptable error rate to be around 10⁻¹².
· In many research reports you find optical network error rates of 10⁻⁹ quoted. Many people feel that, in the context of their use as the lowest-layer network within a stack of networks, this figure is just not good enough. This is a judgement call - but...

· The faster the link, the lower we need the error rate to be! But the harder that low error rate becomes to deliver.
· In many standards (such as the ATM recommendations) the expected error rate performance of the links over which the system will be run is specified in the standard.
