Fiber Cladding
The cladding is the layer of dielectric material that immediately surrounds the core of an optical fiber and completes the composite structure that is fundamental to the fiber’s ability to guide light. The cladding of telecommunications-grade optical fiber is also made from silica glass and is as critical to achieving the desired optical performance as the core itself.

For optical fiber to work, the core must have a higher index of refraction than the cladding; otherwise the light will refract out of the fiber and be lost. Initially, multiple cladding diameters were available, but the industry quickly converged on a standard cladding diameter of 125 μm, recognizing that a common size was needed for intermateability.

A cladding diameter of 125 μm is still the most common, although other core and cladding size combinations exist for other applications. Because of their similar physical properties, it is possible, and in fact highly desirable, to manufacture the core and cladding as a single piece of glass that cannot be physically separated into its two components.

It is the refractive index profile of the composite core-clad structure that guides the light as it travels down the fiber. The specific materials, design, and construction of these optical fibers make them ideally suited for transmitting large amounts of data over the considerable distances seen in today’s telecommunications systems.

Fiber Coating
The third section of an optical fiber is the outer protective coating. The typical diameter of an uncolored coated fiber is 245 μm, but, as with the core and cladding, other sizes are available for certain applications.

Coloring fibers for identification increases the final diameter to around 255 μm. The protective coating typically consists of two layers of an ultraviolet (UV) light-cured acrylate applied by the fiber manufacturer during the fiber draw process.

The inner coating layer is softer to cushion the fiber from stresses that could degrade its performance, while the outer layer is made much harder to improve the fiber’s mechanical robustness. This composite coating provides the primary line of physical and environmental protection for the fiber.

It protects the fiber surface to preserve the inherent strength of the glass, protects the fiber from bending effects, and simplifies fiber handling. The colored ink layer has properties similar to the outer coating, and is thin enough that its presence does not significantly affect the fiber’s mechanical or optical properties.



Optical fiber provides many fundamental advantages over alternative transmission technologies for telecommunications applications. The comparatively limited performance of copper conductor based systems forces the use of expensive signal conditioning and regeneration equipment (e.g., amplifiers and repeaters) at much closer intervals than for fiber optic systems.

A single line of a voice-grade copper system (i.e., 56 kb/s) longer than a couple of kilometers requires in-line signal processing for satisfactory performance, and even then is subject to the electromagnetic effects of interfering radio-frequency sources such as radio, television, cell phone, and air traffic control broadcasts.

As information throughput requirements increase with the demands of more data-intensive applications at the end-user premises, the spacing between the copper-based repeater points must decrease in order to maintain the same aggregate data rate capability over a given length.

Contrast that with all-optical systems, in which it is not unusual to transmit data at 10 gigabits per second over hundreds of kilometers without the need for active signal processing between the transmitter and receiver.

Additionally, as it becomes necessary to increase the data transmission capacity or coverage area of a telecommunications system, the diameter and weight of cables for copper conductor systems increase much more rapidly than for optical fiber systems, resulting in a proportionally higher increase in materials, installation, and maintenance related costs.

The small size of optical cables, coupled with readily available components that make efficient use of the optical fiber’s transmission capabilities, enables them to be manufactured and installed in much longer lengths than copper cables. The virtually unlimited capacity of optical fiber also alleviates fears of incurring significant long-term costs associated with frequent system upgrades, extensions, or overbuilds.

The availability of long lengths of individual lightweight fiber optic cables, up to 10 km or more, also makes the installation of fiber optic systems much safer, easier, and less expensive than comparable copper-based systems.

Because of their design, fiber optic cables can generally be installed with the same equipment historically used to install twisted-pair and coaxial cables, with some allowance for the smaller size and lower rated tensile strength of fiber optic cable.

More importantly, fiber optic cable design has progressed to the point where it serves as an enabling technology for newer installation methods that are faster, less expensive, and less intrusive to the environment than traditional installation means. Optical cables can be installed in duct system spans of 4000 meters (m) or more depending on the condition, construction, and layout of the duct system, and the details of the installation technique(s) used.

Even longer lengths of fiber optic cable can be installed aerially, trenched, or buried in the ground and ocean floor. These extra-long lengths of cable reduce the number of splice points, thereby making the overall installation of optical fiber based telecommunications systems more efficient. The small size of fiber optic cable also saves on valuable conduit space in buried duct applications.

This feature becomes even more pronounced when considering some emerging cable types that are specifically designed for use with air-blown or air-assisted installation techniques into miniature ducts that are only about one centimeter in diameter.

Another advantage of optical fiber and fiber optic cable is the inherent flexibility in design options, allowing for the development of innovative products for specific applications. Since optical fiber is a man-made composite glass structure, it can be custom designed to meet optimal cost/performance targets in any number of specific applications.

Because fiber optic cable can be made all-dielectric, it neither conducts electrical current nor is affected by electromagnetic interference, making it the ultimate in electromagnetically compatible transmission media.

This eliminates such issues as dangerous ground loops, the effects of voltage spikes from the cycling of heavy electrical equipment, and requirements for separate conduits for metallic conductors.
It also improves the security of controlled transmission rooms, as it is much more difficult to tap a fiber optic line and much easier to provide physical security for fiber optic cable.



The effort required for the design of an integrated circuit depends on the complexity of the circuit. The requirement may range from several days’ effort for a single designer to several months’ work for a team of designers.

Custom design of complex integrated circuits is the most demanding. By contrast, semicustom design of LSI and VLSI that utilize preexisting designs, such as standard cells and gate arrays, requires less design effort.

IC design is performed at many different levels, and Fig. 8.9 is one possible (non-unique) depiction of these levels.

Level 1 presents the design in terms of subsystems (standard cells, gate arrays, custom subcircuits, etc.) and their interconnections. Design of the system layout begins with the floor plan of level 3.

It does not involve the layout of individual transistors and devices, but is concerned with the geometric arrangement and interconnection of the subsystems.

Level 4 involves the circuit design of the subsystems.

Levels 2 and 5 involve system and subcircuit simulations, respectively, which may lead to modifications in levels 1 and/or 4.

Discussion here will focus primarily on the system design of level 1 and the subsystem circuit design of level 4. Lumped under the fabrication process of level 7 are many tasks, such as mask generation, process simulation, wafer fabrication, testing, etc.

Broadly speaking, floor plan generation is a part of layout. For large ICs, layout design is often relevant to system and circuit design.



Fusible-link PROM has now largely been superseded by ultraviolet erasable programmable read-only memory (UVEPROM) and electrically erasable programmable read-only memory (EEPROM). While fusible-link devices are effectively permanent, UVEPROM and EEPROM have expected data retention times of 10 to 40 years at room temperature. This has implications for system reliability, so they may not be suitable for some systems, such as satellites, that are exposed to very high temperatures or radiation.

UVEPROM and EEPROM use floating-gate FETs as the programmable elements. These operate like a normal FET except that the gate structure contains an extra isolated conducting layer, the floating gate, which forms a capacitor that can be charged by applying a much higher voltage than is used in normal operation.

The effect of charging the capacitor is to change the threshold voltage of the FET. In the uncharged state, the floating gate prevents the FET from turning on when the row line is pulled high, so the column line is not pulled low. Once the floating gate is charged, the FET can be turned on, pulling the column line low. FLASH memory is based on similar physical effects, but the logical architecture is different.

The charge will remain on the capacitor until it leaks away over time, taking 10 to 40 years at room temperature; this leakage can be accelerated by exposure to ultraviolet (UV) light or a high voltage. UVEPROMs are designed to be erased by exposure to short-wavelength UV radiation for about 20 minutes.

It should be noted that the device will be erased by leaving it in direct sunlight for a few days, or under bright fluorescent light for a few months to a year. The package has a quartz window (Figure 10.2) to allow the light in, and this should be covered with a lightproof label if the device is likely to be exposed.

UVEPROMs are also available without the window in the package; these devices are referred to as one-time programmable (OTP) devices. The silicon die is identical to that used in the windowed part, but the cost of the package is lower.

Microcontrollers are often provided in UVEPROM form for development work and in OTP form for production. EEPROMs do not need the window because they have additional circuitry to erase and rewrite the bits.

Fusible-link memories are permanent and cannot be reprogrammed, although it is sometimes possible to design a program arrangement so that sections of program can be bypassed by blowing more fuses. The reason that the no-operation (NOP) instruction of some older microprocessors is FFH is to allow changes to programmable devices that cannot be erased.

An instruction can be changed to NOP by blowing all the unblown fuses of a byte. Modern microcontrollers often use 00H as the NOP instruction for the same reason, since in OTP versions of UVEPROMs program code can be deleted by programming all the bits of a byte.
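The one-way nature of fuse blowing (or OTP programming) is what makes an all-ones or all-zeros NOP so useful for patching. A minimal sketch of that constraint, using a hypothetical `can_patch` helper (not from the text):

```python
# Sketch (hypothetical model): why FFH/00H make good NOPs in one-time
# programmable memory. Programming moves each bit in one direction only,
# so the only value reachable from *any* byte is all-ones (fuse-blown = 1)
# or all-zeros (programmed = 0), depending on the device family.

def can_patch(old: int, new: int, programmed_bit: int) -> bool:
    """Return True if `old` can be rewritten as `new` in a device
    whose irreversible programming step sets bits to `programmed_bit`."""
    if programmed_bit == 1:
        # Bits may go 0 -> 1 only: every 1 in `old` must remain 1 in `new`.
        return (old & new) == old
    # Bits may go 1 -> 0 only: every 0 in `old` must remain 0 in `new`.
    return (old | new) == old

# Any instruction byte can be turned into the NOP:
assert can_patch(0x3A, 0xFF, programmed_bit=1)   # older micros: NOP = FFH
assert can_patch(0x3A, 0x00, programmed_bit=0)   # newer micros: NOP = 00H
# ...but an arbitrary rewrite is generally impossible:
assert not can_patch(0x3A, 0x21, programmed_bit=1)
```

The asymmetry is the whole point: once a byte has been forced to the NOP value, no further change is possible, which is acceptable only because NOP is harmless wherever it lands.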

Small memory devices of up to about 256 bytes could be made in a similar way to the 8-byte example shown in Figure 10.1; however, as memory devices get larger, the address-decoding overhead becomes an issue. Square arrays of memory cells are more efficient in their use of silicon.

Using 8 square arrays, one for each bit of the byte, reduces the decoding requirement from 4096 row drivers to 512 row drivers and 512 column lines, making the whole device smaller and nearer to square in shape, which makes layout of the row and column interconnect easier.
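The saving from folding a linear decoder into a square array can be sketched with illustrative sizes (a hypothetical 4096-word array, not tied to the specific device counts quoted above):

```python
# Sketch: decoder-line arithmetic for linear vs. square memory
# organisation (illustrative sizes only).
import math

def decode_lines(words: int) -> tuple[int, int]:
    """Select lines needed: (linear one-row-per-word, square array)."""
    linear = words              # one row driver per word
    side = math.isqrt(words)    # assumes `words` is a perfect square
    square = side + side        # row drivers + column lines
    return linear, square

# A 4096-word array: 4096 drivers linearly, but only 64 + 64 = 128
# lines when folded into a 64 x 64 square.
assert decode_lines(4096) == (4096, 128)
```

In general a square array needs about 2√N lines instead of N, which is why larger devices are always organised this way.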



Passive components are those that need no power supply for their operation and whose action will dissipate power, though in some cases the amount of dissipation is negligible. No purely passive component can have an output that supplies more power than is available at the input.

Active components, by contrast, make use of a power supply, usually DC, so that the signal power output of an active component can be higher than the signal power at the input. Typical passive components are resistors, capacitors and inductors.

Familiar active components are transistors and ICs. All components, active or passive, must be connected into a circuit, and the two main forms of mechanical and electrical connection used in modern electronic circuits are the traditional wire leads, threaded through holes in a printed circuit board, and the more modern surface-mount devices (SMDs), which are soldered directly onto the tracks of the board. Both passive and active components can use either type of connection and mounting.

Components for surface mounting use flat tabs in place of wire leads, and because these tabs can be short the inductance of the leads is greatly reduced. The tabs are soldered directly to pads formed onto the board, so that there are always tracks on the component side of the board as well as on the opposite side.

Most SMD boards are two-sided, with tracks and components placed on both sides of the board. Multilayer boards are also commonly used, particularly for mobile phones (4 to 6 layers) and computer motherboards.

The use of SMDs allows manufacturers to provide components that are physically much smaller, but with connections that dissipate heat more readily, are mechanically stronger, and have lower electrical resistance and lower self-inductance. Some components can be made so small that it is impossible to mark a value or a code number on them.

This presents no problems for automated assembly, since the tape or reel need only be inserted into the correct position in the assembly machine, but considerable care needs to be taken when replacing such components manually, and they should be kept in their packing until they are soldered into place.

Machine assembly of SMD components is followed by automatic soldering processes, which nowadays usually involve the use of solder paste or cream (which also retains components in place until they are soldered) and heating by blowing hot nitrogen gas over the board. Packaging of SMD components is normally on tapes or in reels.



Mobile phone chargers available in the market are quite expensive. The circuit presented here is a low-cost alternative for charging mobile telephones/battery packs with a rating of 7.2 volts, such as the Nokia 6110/6150.

The 220-240V AC mains supply is stepped down to 9V AC by transformer X1. The transformer output is rectified by diodes D1 through D4, wired in a bridge configuration; the positive DC supply is connected directly to the charger’s output contact, while the negative terminal is connected through current-limiting resistor R2.

LED2 works as a power indicator, with resistor R1 serving as its current limiter, and LED3 indicates the charging status. During the charging period, a drop of about 3 volts occurs across resistor R2, which turns on LED3 through resistor R3.

An external DC supply source (for instance, from a vehicle battery) can also be used to energise the charger; in that case resistor R4, after polarity-protection diode D5, limits the input current to a safe value.

The 3-terminal positive voltage regulator LM7806 (IC1) provides a constant output of 7.8V DC: LED1, connected between the common terminal (pin 2) of IC1 and the ground rail, raises the regulator’s 6V output by the LED’s forward voltage drop. LED1 also serves as a power indicator for the external DC supply.
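The output-voltage arithmetic can be sketched as follows, assuming the nominal 6.0V output of the LM7806 and a typical red-LED forward drop of about 1.8V (an assumed value, not stated in the text):

```python
# Sketch of the output-voltage arithmetic for IC1 (nominal assumptions:
# LM7806 regulates 6.0 V above its common pin; LED1 drops about 1.8 V).
V_REG = 6.0    # LM7806 output relative to its common pin (volts)
V_LED = 1.8    # assumed forward drop of LED1 in the common-pin leg (volts)

# Lifting the common pin by V_LED lifts the whole output by V_LED:
v_out = V_REG + V_LED
assert abs(v_out - 7.8) < 1e-9
```

This common-pin-lifting trick works with any 78xx regulator, at the cost of the output tracking the LED's forward-voltage tolerance.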

After constructing the circuit on a veroboard, enclose it in a suitable cabinet. A small heat sink is recommended for IC1.



The circuit is capable of charging a 12-V battery at up to a six-ampere rate. Other voltages and currents, from 6 to 600 V and up to 300 A, can be accommodated by suitable component selection.

When the battery voltage reaches its fully charged level, the charging SCR shuts off, and a trickle charge, as determined by the value of R4, continues to flow.



When light encounters a boundary between two transmissive media with differing indices of refraction, it may be reflected back into the first medium at the interface boundary, bent to a different trajectory (i.e., refracted) as it passes into the second medium, or some combination of the two (see Figure 9.2).

The actual result depends on the angle at which the light strikes the interface (the angle of incidence) and the wavelength-dependent index of refraction values of the two materials. As the light passes from one medium to another, the refracted light obeys Snell’s law (see Equation 9.2).

By convention, the angles used in calculating the light paths are measured from a line drawn normal to the core-clad boundary or fiber centerline.

n1 sin θ1 = n2 sin θ2     (9.2)

Based on this relationship, as the angle of incidence (θ1) increases, the angle of refraction (θ2) approaches 90°.

The angle θ1 that results in θ2 = 90° is called the critical angle. For angles of incidence greater than the critical angle, the light is essentially reflected entirely back into the first medium at an angle equal to the angle of incidence.

This condition is called “total internal reflection,” and it is the basic principle by which optical fibers work. The angle of the reflected light is called the angle of reflection.
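Snell’s law and the critical-angle condition can be sketched numerically; the refractive indices below are illustrative values for a silica core and cladding, not taken from this text:

```python
# Sketch: Snell's law and the critical angle, using illustrative
# refractive indices for a silica core (n1) and cladding (n2).
import math

def refraction_angle(n1: float, n2: float, theta1_deg: float):
    """Angle of refraction from Snell's law (degrees from the normal),
    or None if the ray is totally internally reflected."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None          # total internal reflection
    return math.degrees(math.asin(s))

def critical_angle(n1: float, n2: float) -> float:
    """Incidence angle for which the refracted ray grazes the boundary."""
    return math.degrees(math.asin(n2 / n1))

n1, n2 = 1.48, 1.46          # assumed core/cladding indices
theta_c = critical_angle(n1, n2)
assert 80.0 < theta_c < 81.0                   # roughly 80.6 degrees
assert refraction_angle(n1, n2, 85.0) is None  # beyond critical: TIR
```

Note how close the critical angle is to 90°: because the core and cladding indices differ only slightly, only light travelling nearly parallel to the fiber axis is trapped, which is why launch conditions matter.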

FIGURE 9.2 Refraction of light at a boundary between different media. (Courtesy of Corning Cable Systems LLC and Corning® Inc.)



In recent years the cathode-ray tube (crt) has become familiar to millions of persons as the picture tube in a television set. The crt in television is designed to reproduce an undistorted picture on a screen.

The picture is developed from a series of pulses and varying voltages applied to the elements of the tube. Fundamentally, the crt consists of an electron "gun," a phosphorescent screen, and deflecting devices to control the movement of the electron beam "shot" from the gun (see Fig. 12.47).


How the Cathode-Ray Tube Works
As in any thermionic tube, the heated cathode supplies the electron emission, and these electrons are accelerated toward the screen by the positive charges on the anodes. The intensity of the electron beam is regulated by means of the charge on the control grid.

After the electron beam is accelerated and focused by the anodes, its direction is controlled by the deflection plates. When the electrons strike the phosphor-coated screen, they cause a bright spot to appear.

If an alternating voltage is applied to the vertical deflection plates, the spot will move up and down and form a straight line. In like manner, if an alternating voltage is applied to the horizontal deflection plates, a horizontal straight line will appear on the screen.

In practice the horizontal deflection of the electron beam is used to provide a time base. The output of a sawtooth oscillator is applied to the horizontal deflection plates so that the electron beam will sweep at a steady rate from left to right and at the end of the sweep will return instantly to the left side and start another sweep.

As the voltage rises, the electron beam moves to the right; but when the voltage drops, the beam returns immediately to the left side of the screen.

If we set the horizontal timing for the crt to 60 Hz and apply a 60-Hz alternating voltage to the vertical deflection plates, a stationary sine wave will appear on the screen.
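A minimal model of why equal sweep and signal frequencies give a stationary trace (an idealized sawtooth with instantaneous flyback; all values illustrative):

```python
# Sketch: a 60-Hz sine on the vertical plates appears stationary when
# the horizontal time base also sweeps at 60 Hz, because each sweep
# retraces exactly the same path.
import math

def beam_position(t: float, f_sweep: float = 60.0, f_sig: float = 60.0):
    """(x, y) of the spot: x is an idealized sawtooth sweep (0 to 1,
    then instant flyback), y the vertical-deflection voltage."""
    x = (t * f_sweep) % 1.0                  # left-to-right, then flyback
    y = math.sin(2 * math.pi * f_sig * t)    # vertical deflection
    return x, y

# Successive sweeps trace the identical path, so the wave stands still:
p1 = beam_position(0.004)                  # a point during the first sweep
p2 = beam_position(0.004 + 1 / 60.0)       # same point, one sweep later
assert abs(p1[0] - p2[0]) < 1e-6
assert abs(p1[1] - p2[1]) < 1e-6
```

If the signal frequency differed slightly from the sweep rate, successive sweeps would trace slightly shifted copies of the wave and the pattern would appear to drift across the screen.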



The doppler navigation system is so named because it utilizes the doppler shift principle. The doppler shift is the difference in frequency which occurs between a radar signal emitted from an aircraft radar antenna and the signal returned to the aircraft.

If the signal is sent forward from an aircraft in flight, the returning signal will be at a higher frequency than the signal emitted. The difference in the frequencies makes it possible to measure speed and direction of movement of the aircraft, thus providing information which can be computed to give the exact position of the aircraft at all times with respect to a particular reference point and the selected course.

In the doppler navigation system, flight information is obtained by sending four radar beams of continuous wave, 8,800-MHz energy from the aircraft to the ground and measuring the changes in frequencies of the energy returned to the aircraft.

The change in frequency for any one beam signal is proportional to the speed of the aircraft in the direction of the beam. The radar beams are pointed forward and down at an angle of approximately 45 deg to the right and left of the center of the aircraft and rearward and down at a similar angle.

When the airplane is flying with no drift, the forward signals will be equal. The rearward signals will be equal in magnitude to the forward signals, but opposite in sign.

The difference between the frequencies of the forward and rearward signals will be proportional to the ground speed, hence this difference is used to compute the ground speed and display the value on the doppler indicator.

If the airplane drifts, there will be differences in the frequencies between the right and left beam signals, and these differences are translated into drift angle and displayed on the doppler indicator.
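The textbook two-way Doppler relationship behind this ground-speed measurement can be sketched as follows (the forward-down beam geometry is simplified to a single 45-degree angle; the speed value is illustrative):

```python
# Sketch of the Doppler ground-speed relationship: two-way shift
# f_d = 2 * v * f * cos(theta) / c, with the beam's depression and
# offset angles folded into one 45-degree angle for simplicity.
import math

C = 3.0e8    # speed of light, m/s
F = 8.8e9    # transmitted frequency, Hz (8,800 MHz)

def doppler_shift(v: float, beam_angle_deg: float = 45.0) -> float:
    """Two-way Doppler shift (Hz) for ground speed v (m/s) along a
    beam inclined beam_angle_deg from the flight path."""
    return 2 * v * F * math.cos(math.radians(beam_angle_deg)) / C

v = 200.0                      # ground speed, m/s (about 390 knots)
fwd = doppler_shift(v)         # forward beam: positive shift
aft = -doppler_shift(v)        # rearward beam: equal magnitude, opposite sign

# The fore-aft frequency difference is proportional to ground speed,
# so the speed can be recovered from it directly:
v_est = (fwd - aft) * C / (4 * F * math.cos(math.radians(45.0)))
assert abs(v_est - v) < 1e-6
```

Using the fore-aft difference rather than a single beam also cancels any frequency error common to both beams, which is part of why the arrangement is robust.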

Figure 20.20 is a drawing showing how the radar beams are aimed with respect to the aircraft. The doppler indicator is shown in Fig. 20.21.

The advantage of a doppler system is that it is completely contained in the aircraft and requires no external signals. At the start of a flight, the course or courses to be flown are programmed into the system; thereafter, continuous information regarding the position of the aircraft is displayed on the doppler indicator and the computer controller.
