CLOUD COMPUTING VENDOR LANDSCAPE BASIC INFORMATION AND TUTORIALS

At the beginning of the new millennium there was not yet such a thing as a cloud computing vendor, though (as we have seen) teams were already hard at work on several significant efforts, some of which eventually blossomed into key cloud computing vendors.

In fact, only a few years after the industry's modest beginning, there is a vibrant vendor ecosystem, with everything from relatively established players to the latest, most hopeful startups, and much in between. Hardly a month passes without numerous significant product announcements, nor a quarter without new vendors and open source projects.

Cloud computing is clearly an area of rapid evolution. To remain useful and current, this brief appendix therefore contains the information that is least likely to change rapidly: an overview of the major categories, including examples of some of the vendors in each.

Comprehensive, current listings of companies and products, including industry trends and recent developments are available on the web site. The major categories include three that correspond to the major layers of the cloud technology stack, and two for those providing expertise in one form or another. Each category includes vendors focused on public, private, and hybrid cloud offerings; those focused on commercial as well as government markets; startups and the established; open source, open distribution, and traditional distribution models; and in many cases, all of the above.

Of course certain vendors have offerings in more than one category; a handful intend to cover each category, though that will likely be difficult to achieve and maintain. In any case, here are the major categories, along with a few notes about the history that shaped each category.

Infrastructure as a Service (IaaS)
Vendors in the Infrastructure as a Service (IaaS) category primarily fall into two broad groups: those that provide an existing IaaS and those that provide technology to enable IaaS. Vendors that provide an existing IaaS generally come from cloud technology providers (e.g., Amazon), managed services or hosting providers (e.g., Rackspace, Savvis, etc.), and integrated vendors such as HP, IBM, and Dell.

The technology providers include those who provide software stacks to manage physical or virtualized infrastructure (such as VMware) as well as those who provide hardware (of varying degrees of commoditization) intended for easy stacking, replacement, and so forth (all of the major hardware providers, several startups, and certain fresh entrants from nontraditional vendors, such as Cisco).

This is a category that is likely to see significant innovation. In particular, as the trend toward commoditization of infrastructure matures, very-high-volume providers of commodity infrastructure are likely to dominate, both among ready-to-consume IaaS offerings and technology providers.

CLOUD COMPUTING PLANNING STAGE TACTICS BASIC INFORMATION

In the cloud planning phase, it is necessary to investigate the customer's position in detail and to analyze the problems and risks of cloud adoption, both at present and in the future. After that, concrete approaches and plans can be drawn up to ensure that customers can use cloud computing successfully to reach their business goals.

This phase includes several practicable planning steps, listed as follows:

(1) Business Architecture Development
While capturing the organizational structure of the enterprise, the business models also capture information on how business processes are supported.

As the various business processes and their related networks in the enterprise architecture are set down one after another, the gains and losses associated with each path in the business development process also become clear.

From a business perspective, we categorize these as business benefits and as possible risks brought by the application of cloud computing.

(2) IT Architecture Development
It is necessary to identify the major applications needed to support enterprise business processes and the key technologies needed to support enterprise applications and data systems. In addition, cloud computing maturity models should be introduced and technological reference models analyzed, so as to provide help, advice, and strategic guidance for the design and realization of the cloud computing mode within the enterprise architecture.

(3) Requirements on Quality of Service Development
Compared with other computing modes, the most distinguishing feature of the cloud computing mode is that the requirements on quality of service (also called non-functional requirements), such as performance, reliability, security, and disaster recovery, should be rigorously defined beforehand.

This requirement is a key factor in deciding whether a cloud computing application is successful and whether the business goal is reached; it is also an important standard for measuring the quality of a cloud computing service or the competence in establishing a cloud computing center.

(4) Transformation Plan Development
It is necessary to formulate all the plans needed for the transformation from the current business systems to cloud computing modes, including the general steps, scheduling, quality guarantees, etc. Usually, an infrastructure service cloud covers items such as an infrastructure consolidation plan report, an operation and maintenance management system plan, a management process plan, and an application system transformation plan.
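
The quality-of-service requirements called for in step (3) lend themselves to a machine-readable checklist. The sketch below is purely illustrative; the field names, metrics, and threshold values are assumptions, not drawn from the text:

```python
from dataclasses import dataclass

@dataclass
class QosRequirement:
    """One non-functional requirement for a cloud service (illustrative)."""
    name: str        # e.g., "performance", "reliability"
    metric: str      # how the requirement is measured
    target: float    # required threshold
    unit: str

# Hypothetical requirements matching the categories named in the text:
requirements = [
    QosRequirement("performance", "95th-percentile response time", 200.0, "ms"),
    QosRequirement("reliability", "monthly availability", 99.9, "%"),
    QosRequirement("disaster recovery", "recovery time objective", 4.0, "hours"),
]

def meets(req: QosRequirement, measured: float, higher_is_better: bool) -> bool:
    """Check a measured value against its target."""
    return measured >= req.target if higher_is_better else measured <= req.target

print(meets(requirements[1], 99.95, higher_is_better=True))  # availability met?
```

Defining such thresholds up front, as the text recommends, gives both parties an objective standard against which the delivered service can later be measured.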

CLOUD COMPUTING SECURITY BASIC INFORMATION

One of the biggest user concerns about cloud computing is security, as is natural with any emerging Internet technology. In enterprise data centers and Internet Data Centers (IDCs), service providers offer racks and networks only; the remaining devices have to be provided by users themselves, including servers, firewalls, software, storage devices, etc.

While this is a complex task for the end user, he does have a clear overview of the architecture and the system, placing the design of data security under his control. Some users even use physical isolation (such as iron cages) to protect their servers. Under cloud computing, the backend resource and management architecture of the service is invisible to users (hence the word "cloud" to describe an entity far removed from our physical reach). Without physical control and access, users naturally question the security of the system.

A comparable analogy to data security in a cloud is found in financial institutions: a customer deposits his cash into an account with a bank and thus no longer has a physical asset in his possession. He relies on the technology and financial integrity of the bank to protect his now virtual asset.

Similarly, we can expect a progression in the acceptance of placing data in physical locations out of our reach but with a trusted provider. To establish that trust with end users, the architects of cloud computing solutions must design rationally to protect data security among end users, and between end users and service providers.

From a technical point of view, the security of user data is reflected in the following implementation rules:

1. Privacy of stored user data. Stored user data cannot be viewed or changed by others (including the operator).

2. Privacy of user data at runtime. User data, when loaded into system memory, cannot be viewed or changed by others.

3. Privacy when transferring user data through the network. This covers the security of data transfers both within the cloud computing center's intranet and over the Internet; in neither case can the data be viewed or changed by others.

4. Authentication and authorization for users accessing their data. Users can access their data only through authorized channels and can grant other users access.
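
Rule 4 in particular can be illustrated with a small sketch. This is not how any particular cloud provider implements authorization; it is a minimal, assumed design in which the provider signs access grants with a server-side key (Python standard library only; all names are hypothetical):

```python
import hashlib
import hmac
import json
import time

SECRET = b"service-side signing key"  # held by the provider, never by users

def issue_token(owner: str, grantee: str, scope: str, ttl_s: int = 3600) -> str:
    """Owner authorizes `grantee` to access the owner's data with `scope`."""
    claims = json.dumps({"owner": owner, "grantee": grantee,
                         "scope": scope, "exp": time.time() + ttl_s},
                        sort_keys=True)
    sig = hmac.new(SECRET, claims.encode(), hashlib.sha256).hexdigest()
    return claims + "." + sig

def verify_token(token: str, grantee: str, scope: str) -> bool:
    """Check the signature, expiry, and that the token matches the request."""
    claims_json, _, sig = token.rpartition(".")  # hex digest contains no "."
    expected = hmac.new(SECRET, claims_json.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(claims_json)
    return (claims["grantee"] == grantee and claims["scope"] == scope
            and claims["exp"] > time.time())

token = issue_token("alice", "bob", "read")
print(verify_token(token, "bob", "read"))    # True: authorized grantee
print(verify_token(token, "eve", "read"))    # False: not the named grantee
```

The design choice here mirrors the bank analogy above: the user cannot physically guard the data, so trust rests on the provider's key management and on cryptographic verification of every access.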

RFID PROTOCOL TERMS AND CONCEPTS

Technical jargon develops around any new technology, and RFID is no exception. Some of these terms are quite useful, serving as convenient shorthand for concepts that will appear in the pages that follow. These terms include:

Singulation
This term describes a procedure for reducing a group of things to a stream of things that can be handled one at a time. For example, a subway turnstile is a device for singulating a group of people into a stream of individuals so that the system may count them or ask them for access tokens.

This same singulation is necessary when communicating with RFID tags, because if there is no mechanism to enable the tags to reply separately, many tags will respond to a reader at once and may disrupt communications.

Singulation also implies that the reader learns the individual ID of each tag, thus enabling inventories. An inventory of a group of tags is simply singulation repeated until no unknown tags respond.

Anti-collision
This term describes the set of procedures that prevent tags from interrupting each other and talking out of turn. Whereas singulation is about identifying individual tags, anti-collision is about both regulating the timing of responses and finding ways of randomizing those responses so that a reader can understand each tag amidst the plethora of responses.
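
The tree-walking idea behind singulation and anti-collision can be sketched as follows. This is a simplified model of one common deterministic scheme, not any particular RFID standard's protocol, and the tag IDs and bit length are chosen purely for illustration:

```python
def singulate(tag_ids, bits=8):
    """Binary tree-walking singulation (simplified model).

    The reader broadcasts an ID prefix; every tag whose ID begins with
    that prefix answers.  A collision (more than one answer) makes the
    reader descend with a longer prefix; a single answer singulates one
    tag, so the reader learns its full ID.
    """
    found, stack = [], [""]
    while stack:
        prefix = stack.pop()
        responders = [t for t in tag_ids
                      if format(t, f"0{bits}b").startswith(prefix)]
        if len(responders) == 1:
            found.append(responders[0])              # exactly one reply: singulated
        elif len(responders) > 1:
            stack.extend([prefix + "1", prefix + "0"])  # collision: split the group
    return found

tags = [0b00010110, 0b00010111, 0b11001010]
print(sorted(singulate(tags)))  # all three tags read one at a time
```

Repeating this until no unknown tags respond is exactly the inventory operation described above.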

Identity
An identity is a name, number, or address that uniquely refers to a thing or place. "Malaclypse the Elder" is an identity referring to a particular person. "221b Baker Street London NW1 6XE, Great Britain" is an identity referring to a particular place, just as "urn:epc:id:sgtin:00012345.054322.4208" is an identity referring to a particular widget.

ADVANTAGES OF RFID OVER OTHER TECHNOLOGIES BASIC INFORMATION

There are many different ways to identify objects, animals, and people. Why use RFID? People have been counting inventories and tracking shipments since the Sumerians invented the lost package. Even some of the earliest uses of writing grew from the need to identify shipments and define contracts for goods shipped between two persons who might never meet.[*] Written tags and name badges work fine for identifying a few items or a few people, but to identify and direct hundreds of packages an hour, some automation is required.

The bar code is probably the most familiar computer-readable tag, but the need to scan a laser beam over the bar code imposes some limitations. Most importantly, it requires a direct "line of sight," so the item has to be right side up and facing in the right direction, with nothing blocking the beam between the laser and the bar code.

Most other forms of ID, such as magnetic strips on credit cards, also must line up correctly with the card reader or be inserted into the card reader in a particular way. Whether you are tracking boxes on a conveyor or children on a ski trip, lining things up costs time.

Biometrics can work for identifying people, but optical and fingerprint recognition each require careful alignment, similar to magnetic strips. Facial capillary scans require you to at least face the camera, and even voice recognition works better if you aren't calling your passphrase over your shoulder.

RFID tags provide a mechanism for identifying an item at a distance, with much less sensitivity to the orientation of the item and reader. A reader can "see" through the item to the tag even if the tag is facing away from the reader.

RFID has additional qualities that make it better suited than other technologies (such as bar codes or magnetic strips) for creating the predicted "Internet of Things."[*] One cannot, for instance, easily add information to a bar code after it is printed, whereas some types of RFID tags can be written and rewritten many times. Also, because RFID eliminates the need to align objects for tracking, it is less obtrusive. It "just works" behind the scenes, enabling data about the relationships between objects, location, and time to quietly aggregate without overt intervention by the user or operator.

[*] This term was originally attributed to the Auto-ID Center. We will discuss both this term and the Auto-ID Center in more detail later in this book.

To summarize, some of the benefits of RFID include the following:

Alignment is not necessary

A scan does not require line of sight. This can save time in processing that would otherwise be spent lining up items.

High inventory speeds

Multiple items can be scanned at the same time. As a result, the time taken to count items drops substantially.

Variety of form factors

RFID tags range in size from blast-proof tags the size of lunch boxes to tiny passive tags smaller than a grain of rice. These different form factors allow RFID technologies to be used in a wide variety of environments.

Item-level tracking

Whereas a bar code typically identifies only a class of product, an RFID tag can carry a unique serial number, so individual items can be distinguished from one another.

Rewritability

Some types of tags can be written and rewritten many times. In the case of a reusable container, this can be a big advantage. For an item on a store shelf, however, this type of tag might be a security liability, so write-once tags are also available.

ANTENNA BANDWIDTH BASIC INFORMATION AND TUTORIALS

Antennas can find use in systems that require narrow or large bandwidths depending on the intended application. Bandwidth is a measure of the frequency range over which a parameter, such as impedance, remains within a given tolerance.

Dipoles, for example, are by their nature very narrow band. For narrow-band antennas, the percent bandwidth can be written as

percent bandwidth = (fU − fL) × 100 / fC

where
fL = lowest usable frequency
fU = highest usable frequency
fC = center design frequency

In the case of a broadband antenna it is more convenient to express bandwidth as the ratio

fU / fL

One can arbitrarily define an antenna to be broadband if the impedance, for instance, does not change significantly over one octave ( fU / fL = 2).
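
Both figures of merit are straightforward to compute. In the sketch below, the usable band (88 to 108 MHz around a 98 MHz center) is a hypothetical example, not taken from the text:

```python
def percent_bandwidth(f_low, f_high, f_center):
    """Narrow-band figure from the text: (fU - fL) x 100 / fC."""
    return (f_high - f_low) * 100.0 / f_center

def ratio_bandwidth(f_low, f_high):
    """Broadband figure: fU / fL; a value of 2 spans one octave."""
    return f_high / f_low

# A hypothetical antenna usable from 88 to 108 MHz, centered at 98 MHz:
print(percent_bandwidth(88e6, 108e6, 98e6))   # about 20.4 percent
print(ratio_bandwidth(88e6, 108e6) >= 2)      # False: well under an octave
```

By the arbitrary one-octave criterion above, this hypothetical antenna would still count as narrow band despite its 20 percent bandwidth.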

The design of a broadband antenna relies in part on the concept of a frequency-independent antenna. This is an idealized concept, but understanding of the theory can lead to practical applications. Broadband antennas are of the helical, biconical, spiral, and log-periodic types.

Frequency independent antenna concepts are discussed later in this chapter. Some newer concepts employing the idea of fractals are also discussed for a new class of wide band antennas.

Narrow-band antennas can be made to operate over several frequency bands by adding resonant circuits in series with the antenna wire. Such traps allow a dipole to be used at several spot frequencies, but the dipole still has a narrow band around the central operating frequency in each band.

Another technique for increasing the bandwidth of narrow-band antennas is to add parasitic elements, such as is done in the case of the open-sleeve antenna (Hall, 1992).

FIBER CLADDING AND COATING BASIC INFORMATION

Fiber Cladding
The cladding is the layer of dielectric material that immediately surrounds the core of an optical fiber and completes the composite structure that is fundamental to the fiber’s ability to guide light. The cladding of telecommunications grade optical fiber is also made from silica glass, and is as critical in achieving the desired optical performance properties as the core itself.

For optical fiber to work, the core must have a higher index of refraction than the cladding or the light will refract out of the fiber and be lost. Initially multiple cladding diameters were available, but the industry swiftly arrived at a consensus standard cladding diameter of 125 μm, because it was recognized that a common size was needed for intermateability.

A cladding diameter of 125 μm is still the most common, although other fiber core and cladding size combinations exist for other applications. Because of their similar physical properties it is possible, and in fact highly desirable, to manufacture the core and cladding as a single piece of glass which cannot be physically separated into the two separate components.

It is the refractive index characteristics of the composite core-clad structure that guide the light as it travels down the fiber. The specific materials, design, and construction of these types of optical fibers make them ideally suited for use in transmitting large amounts of data over the considerable distances seen in today’s modern telecommunications systems.
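
The guiding condition (core index above cladding index) is usually quantified by the numerical aperture, NA = sqrt(n1^2 − n2^2), a standard step-index fiber result not stated in the text; the index values below are assumed for illustration only:

```python
import math

def numerical_aperture(n_core, n_clad):
    """NA of a step-index fiber; guiding requires n_core > n_clad."""
    if n_core <= n_clad:
        raise ValueError("light is not guided: core index must exceed cladding")
    return math.sqrt(n_core**2 - n_clad**2)

# Illustrative (assumed) indices for a silica core and cladding:
na = numerical_aperture(1.468, 1.463)
theta_max = math.degrees(math.asin(na))  # half-angle of the acceptance cone
print(round(na, 3), "acceptance half-angle:", round(theta_max, 1), "degrees")
```

The small index difference in this sketch reflects why the core and cladding can be drawn as a single piece of glass: optically distinct, physically inseparable.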

Fiber Coating
The third section of an optical fiber is the outer protective coating. The typical diameter of an uncolored coated fiber is 245 μm, but, as with the core and cladding, other sizes are available for certain applications.

Coloring fibers for identification increases the final diameter to around 255 μm. The protective coating typically consists of two layers of an ultraviolet (UV) light cured acrylate that is applied during the fiber draw process, by the fiber manufacturer.

The inner coating layer is softer to cushion the fiber from stresses that could degrade its performance, while the outer layer is made much harder to improve the fiber’s mechanical robustness. This composite coating provides the primary line of physical and environmental protection for the fiber.

It protects the fiber surface to preserve the inherent strength of the glass, protects the fiber from bending effects, and simplifies fiber handling. The colored ink layer has properties similar to the outer coating, and is thin enough that its presence does not significantly affect the fiber’s mechanical or optical properties.

ECCM – RADAR PROBLEMS

Jammers are typically barrage noise or repeater jammers. The former try to prevent all radar detections whereas the latter attempt to inject false targets to overload processing or attempt to pull trackers off the target.

A standoff jammer attempts to protect a penetrating aircraft by increasing the level of noise in the radar’s receiver. In such an environment, the radar should be designed with electronic counter countermeasures.

These can include: adaptive receive antennas (e.g., an adaptive array or sidelobe canceler); polarization cancelers (easily defeated by a jammer using independent jamming on horizontal and vertical polarizations); sidelobe blankers to prevent false pulses entering through the sidelobes; frequency and prf agility to make life more difficult for the repeater jammer; low probability of intercept (LPI) waveforms; spread spectrum waveforms that decorrelate CW jammers; a spoofer waveform with a false frequency on the leading edge of the pulse to defeat set-on repeaters, or a spoofer antenna whose EIRP covers the sidelobes of the main antenna and masks the transmitted pulses in those directions; and receiver measures such as CFAR/Dicke-fix, guard band blanking, excision of impulsive noise in the time domain, and excision of narrow-band jammers in the frequency domain.

In stressing cases, the radar can employ burn-through (i.e., long dwells with noncoherent integration of pulses). Bistatic radars can also be used to avoid jamming. For example, a standoff (sanctuary) transmitter can be used with forward-based netted receive-only sensors (which avoid antiradiation missiles (ARMs) and responsive jammers) to locate targets via multilateration.

Ultralow sidelobe antennas can be complemented with remote ARM decoy transmitters that cover the radar’s sidelobes. Adaptive antennas include both adaptive arrays and sidelobe cancelers. The adaptive array includes a number of low-gain elements whereas the sidelobe canceler has a large main antenna and one or more low-gain auxiliary elements having sufficient gain margin to avoid carryover noise degradation.

The processing algorithms are either analog (e.g., Applebaum or Widrow LMS feedback), which can compensate for nonlinearities, or digital (sample matrix inversion or various eigenvector approaches, including Gram–Schmidt and singular value decomposition (SVD)). Systolic configurations have been implemented for increased speed using Givens rotations or Householder (conventional and hyperbolic) transformations.

In a sidelobe canceller (SLC) the jamming signal is received in the sidelobe of the main antenna as well as in the low-gain auxiliary element. By weighting the auxiliary signal to match that of the main antenna and setting the phase difference to 180◦, the auxiliary signal can be added to the main channel yielding cancellation of the jammer.

The weighting is determined adaptively, since the main antenna is usually rotating. Target returns in the mainbeam are not canceled because they have much higher gain in the main antenna than their associated returns in the auxiliary antenna. Since target returns are pulsed while the jammer is continuous, they have little effect in setting the adaptive weight. Because the closed-loop gain of an analog canceler is proportional to the jamming level, the weights converge faster on larger jammers, creating an eigenvalue spread.

To prevent the loop from becoming unstable, receiver gains must be set for a given convergence time on the largest expected jammer. Putting limiters or AGC in the loops will minimize the eigenspread on settling time. The performance of jammer cancellation depends on the nulling bandwidth since the antenna pattern is frequency sensitive and the receivers may not track over the bandwidth (i.e., weights at one edge of the band may not yield good nulling at the other end of the band).
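
The basic sidelobe-canceler weight described above can be sketched numerically. This is a simplified, assumed simulation: one auxiliary element, and a closed-form Wiener weight estimated from a block of data rather than the closed-loop analog canceler the text describes. All gains and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# Simulated continuous jammer waveform (complex Gaussian noise).
jam = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Main channel: jammer enters through a sidelobe (assumed gain and phase);
# auxiliary channel: same jammer through the low-gain auxiliary element.
main = 0.1 * np.exp(1j * 0.7) * jam + 0.01 * rng.standard_normal(n)
aux = jam + 0.01 * rng.standard_normal(n)

# Adaptive weight: match the auxiliary to the main-channel jammer so the
# subtraction below amounts to equal amplitude and a 180-degree phase shift.
w = np.vdot(aux, main) / np.vdot(aux, aux)

residual = main - w * aux
cancellation = np.mean(np.abs(main)**2) / np.mean(np.abs(residual)**2)
print(cancellation)  # jammer power suppressed well below receiver noise
```

Note how the estimated weight recovers the assumed sidelobe gain and phase; in a real canceler this estimate is formed continuously because the main antenna rotates.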

Broader bandwidth nulling is achieved through more advanced space-time processing; that is, channelize the spectrum into subbands that are more easily nulled or, equivalently, use adaptive tapped delay lines in each element to provide equalization of the bandpasses; that is, the adaptive filter for each element is frequency sensitive and can provide the proper weight at each frequency within the band.

A Frost constraint can be included in digital implementations to maintain beamwidth, monopulse slope, etc., of the adapted patterns. If the jammers are closely spaced, mainlobe nulling may be required. Nulling the jammer will cause some undesired nulling of the target as the jammer-target angular separation decreases.

This is limited by the aperture resolution. Difference patterns can be used as auxiliary elements with the sum beam, and the adaptation will place nulls in the mainlobe of the sum pattern. These approaches actually behave more like conical scan, in which a difference pattern is added to a sum pattern to steer the beam.

The mainbeam squints such that the jammer is placed in the null on the side of the mainbeam. Better angular resolution can be achieved by nulling with two separated array faces. The adaptive pattern can now have sharp nulls that cancel jammers with minimal target loss since the angular resolution is set by the much wider interferometric baseline.

RADAR CLASSIFICATION AND IMAGING BASIC INFORMATION

Classification
Many instrumentation and early warning/BMD radars perform object classification based on radar signature measurements, for example, sorting reentry vehicle (RV) vs. decoy. This is usually obtained through deceleration of the body by the atmosphere, wake effects (mean and spread), micro dynamic motion (nose tip precession), polarization, range profile, inverse synthetic aperture radar (ISAR) imaging, radar cross-section (rcs) statistics, etc.

Typical estimators include Bayesian approaches. The K factor describes the ability to resolve two types of objects, that is, the separation of their probability density functions normalized to the spread of the density function.
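
The K factor can be sketched numerically as below. The exact normalization varies between references, so the form used here (mean separation over the root-sum-square of the spreads) and all the numbers are assumptions for illustration:

```python
import math

def k_factor(mu1, sigma1, mu2, sigma2):
    """Separation of two class means normalized to the spread of their
    density functions (one common convention; normalization assumed)."""
    return abs(mu1 - mu2) / math.sqrt(sigma1**2 + sigma2**2)

# Hypothetical RCS-statistic means and spreads for RV vs. decoy (dBsm):
print(k_factor(mu1=-10.0, sigma1=2.0, mu2=-18.0, sigma2=3.0))
```

The larger the K factor, the less the two density functions overlap and the lower the probability of misclassifying one object type as the other.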

Many lightweight traffic decoys (e.g., balloons) can be placed on a post boost vehicle (PBV) by replacing an RV, but the ability of the lightweight decoy to penetrate the defense is less than that of a heavier replica decoy. The Munkres algorithm can be used for optimally assigning objects seen on one sensor to those seen by another sensor, that is, handover or target object mapping (TOM).

Much work has been performed in the past in identifying or classifying battlefield vehicles (e.g., truck, jeep, tank) based on high-range resolution measurements. A priori measured range profiles at various angles can be stored to be matched against by an unknown object.

Sometimes features are extracted from the data such as spacing between largest spikes, order of magnitude of spikes, etc. Some of the work has involved the use of neural networks.

Imaging
The simplest imaging radars use high range resolution with Doppler processing, that is, FFTs within the range cells. For a rotating object, the Doppler frequency increases with distance from the axis of rotation; mapping cross range into Doppler thus produces two-dimensional ISAR images.

Range walk will limit Doppler resolution since it determines how many pulses can be processed in the Doppler filter. The crossrange resolution is related to the angle through which the target rotates during the coherent processing.

ISAR is similar to the conventional noncoherent tomography (Radon transform, back projection) used in X-ray processing. Since it is only the relative motion between radar and target that matters, the turning object in ISAR is equivalent to a stationary target and a synthetic circular SAR, that is, an aircraft flying a circle about the target.

More advanced ISAR imaging radars use polar processing to avoid the range walk problem. The most advanced imaging radars use extended coherent processing, where an image is created by coherently overlaying images from several complete rotations of the object. Maximum entropy method (MEM) techniques can be used to extend the bandwidth and provide sharper images for a given actual RF bandwidth.
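
The basic range-Doppler formation step (FFTs within the range cells) can be sketched as follows, using an idealized single scatterer whose rotation produces a constant Doppler shift in its range cell; all parameters are illustrative:

```python
import numpy as np

# Simulated slow-time data: pulses x range cells.  A scatterer at some
# distance from the rotation axis appears as a constant-Doppler tone in
# its range cell (idealized: no range walk within the dwell).
n_pulses, n_range, prf = 64, 32, 1000.0
t = np.arange(n_pulses) / prf

data = np.zeros((n_pulses, n_range), dtype=complex)
f_doppler = 125.0                    # Hz, set by distance from rotation axis
data[:, 10] = np.exp(2j * np.pi * f_doppler * t)

# ISAR image formation: FFT across pulses within each range cell maps
# cross range into Doppler.
image = np.fft.fftshift(np.fft.fft(data, axis=0), axes=0)
dopp_axis = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1 / prf))

peak_pulse, peak_range = np.unravel_index(np.argmax(np.abs(image)), image.shape)
print(peak_range, dopp_axis[peak_pulse])  # scatterer at range cell 10, 125 Hz
```

In this idealized sketch the scatterer stays in one range cell for the whole dwell; range walk, as noted above, is what limits how many pulses can be processed in the Doppler filter in practice.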

Airborne synthetic imaging radars (i.e., conventional SAR) use a small aperture on a moving platform. By storing the pulses and coherently combining them, a large synthetic array can be constructed that is focused at all ranges.

The effective synthetic pattern is actually a two-way pattern, and the cross-range resolution at every range is about half the size of the physical antenna on the aircraft. At each range, the phase history from a scatterer produces a quadratic runout (i.e., LFM) that varies with range.

Each range cell is matched-filtered, yielding pulse compression in the azimuth direction. Since Doppler frequency maps into cross range, moving objects such as a train create range-Doppler coupling and may image off the tracks.

Stereoscopic imaging can be performed by using SAR mapping from two aircraft or multiple displaced apertures on the same aircraft. The phase difference in a common pixel between the two apertures provides height data within the pixel.

RADAR TRACKING BASIC INFORMATION

Tracking involves both data association and the process of filtering, smoothing, or predicting. Data association involves determining the origin of the measurements: whether a return is a false alarm, clutter, or a valid target; which returns go with which tracks; and whether a return is the first from a given target.

Given that the return is properly associated, an algorithm is needed to include this latest measurement in a manner that will improve the estimate of the next expected position of the target. Early trackers, such as the alpha-beta-gamma filter, used precomputed fixed gains that were sometimes changed based on maneuver detection.

They were simple to code and required little memory and throughput. As tracking advanced, radars began to use the extended Kalman filter (EKF). Many filter states were used in ballistic missile defense (BMD) and early warning trackers.
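
A minimal fixed-gain alpha-beta filter of the kind described above might look like this; the gains and the constant-velocity target trajectory are illustrative assumptions:

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    """Fixed-gain alpha-beta filter: predict, then correct with the residual."""
    x, v = measurements[0], 0.0          # initial position and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict next position
        r = z - x_pred                   # innovation (residual)
        x = x_pred + alpha * r           # position correction (fixed gain)
        v = v + (beta / dt) * r          # velocity correction (fixed gain)
        estimates.append((x, v))
    return estimates

# Hypothetical constant-velocity target at 5 units/s, sampled every 1 s:
truth = [5.0 * k for k in range(20)]
est = alpha_beta_track(truth, dt=1.0)
print(round(est[-1][0], 2), round(est[-1][1], 2))  # converges near 95.0, 5.0
```

The gains here are precomputed and fixed, which is exactly why such filters were cheap in memory and throughput; schemes that switch gains on maneuver detection, as the text notes, were an early refinement.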

More modern tracking approaches use nonuniformly scheduled pulses, Kalman filtering of multiple sensors, nonlinear filters, interacting multiple models (IMM), joint probabilistic data association (JPDA), and multiple hypothesis tracking (MHT).

Decision-directed techniques, such as MHT, can result in a growing memory that must be pruned as possibilities are deemed unlikely. As target density or clutter increases, many false tracks can initiate but over time it becomes obvious which are actual and which are bogus.

The Hough transform can be used to track/detect straight line trajectories or those generalized for curvature. Phased arrays provide flexibility for minimizing the energy to track targets with a given accuracy or impact point prediction (IPP).

Options include revisit interval, dwell time, and beam width spoiling. A tracking radar can use a high prf that avoids range blindness on the tracked target while providing ample Doppler space free of clutter.

If a search radar is ambiguous in range, a different prf must be used on each dwell to resolve range ambiguities. If a target is within the unambiguous range interval, the range cell where the detection occurs does not change. If the target is beyond the range ambiguity distance, the range cell number changes due to the range fold over.

The Chinese remainder theorem can be used to unravel the true range based on several ambiguous range measurements. The range estimate, however, can be grossly in error if an unambiguous range cell number is off by a single range cell due to measurement noise.

Other approaches to resolving range ambiguities avoid this problem. For example, the entire instrumented range can be laid out for each dwell with a return placed at all corresponding ambiguous range cells.

By summing the dwells in each range cell, the one with the highest count will be the true range since they all occur in this cell. To prevent errors due to a slight range error, one can sum both the range cell and its adjacent neighboring cells over all dwells.

This will ensure that slight misses will be correctly counted in the summation. Methods that resolve range ambiguities for a point target are not very effective for a weather radar, where the target is distributed. Multiple targets can also produce ghosts when ambiguous ranges are unraveled.
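A minimal sketch of the coincidence method just described, in Python; the prfs, range-cell size, and target range below are illustrative assumptions, not values from the text:

```python
C = 3e8  # speed of light, m/s

def resolve_range(apparent_ranges, prfs, max_range, cell=150.0):
    """Lay out every ambiguous range candidate for each dwell and vote
    per range cell, summing each cell and its adjacent neighbors so that
    slight range errors are still counted, as described above."""
    n_cells = int(max_range / cell) + 1
    votes = [0] * n_cells
    for r_app, prf in zip(apparent_ranges, prfs):
        r_unamb = C / (2.0 * prf)  # unambiguous range interval
        r = r_app
        while r < max_range:
            idx = int(round(r / cell))
            for j in (idx - 1, idx, idx + 1):  # cell plus neighbors
                if 0 <= j < n_cells:
                    votes[j] += 1
            r += r_unamb
    return votes.index(max(votes)) * cell  # cell with the highest count

# Example: a target at 95 km observed through three ambiguous prfs
prfs = [8e3, 10e3, 12.5e3]
apparent = [95e3 % (C / (2 * p)) for p in prfs]
est = resolve_range(apparent, prfs, max_range=200e3)
```

Because neighboring cells are also summed, the estimate can land one cell to either side of the true range; ghost cells where only some of the prfs coincide receive fewer votes.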

RADAR ACCURACY AND RESOLUTION BASIC INFORMATION


Accuracy relates to a measurement or prediction being close to the true value of target parameters. Precision relates to the fineness of the measurements, which may not be very accurate, but could be quite precise.

Target parameters for which accuracy is important include range, angle, Doppler, and amplitude. Accuracy varies as a function of range. At long range, thermal noise effects tend to dominate.

At intermediate ranges, accuracy is dominated by the instrumentation errors (relatively constant vs. range). At short ranges, angle glint effects can dominate since the angular extent of the target increases inversely with range.

The accuracy of a given measurement due to thermal noise is given by σ = K/√SNR where K has the same dimensions as the measurement, but is also inversely proportional to the effective width in the other domain of a Fourier transform (FT) pair (i.e., range or time has frequency or bandwidth as its FT pair).

Hence, the K for a range measurement is inversely proportional to bandwidth, and the K for a Doppler frequency measurement is inversely proportional to the time extent of the waveform; that is, it takes a long time to discern small differences in frequency.
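The σ = K/√SNR scaling can be sketched numerically. One commonly quoted form for range is σ ≈ c/(2B√(2·SNR)); the exact constants depend on waveform details, so the √2 factor and the numbers below are illustrative assumptions, not values from the text:

```python
import math

C = 3e8  # m/s

def range_sigma(bandwidth_hz, snr_linear):
    # Thermal-noise range accuracy: K inversely proportional to bandwidth.
    # The sqrt(2) factor is one common convention; treat as illustrative.
    return C / (2.0 * bandwidth_hz * math.sqrt(2.0 * snr_linear))

def doppler_sigma(dwell_s, snr_linear):
    # Analogous Doppler accuracy: K inversely proportional to time extent.
    return 1.0 / (dwell_s * math.sqrt(2.0 * snr_linear))

s1 = range_sigma(1e6, 100.0)  # 1 MHz bandwidth, 20 dB SNR: ~10.6 m
s2 = range_sigma(2e6, 100.0)  # doubling bandwidth halves the error
```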

Since an antenna pattern is the FT of its aperture distribution, the K for angle accuracy is inversely proportional to effective aperture width. Resolution pertains to the question: Is there one target present or many? If two targets are resolved in range (i.e., well separated compared to the compressed pulse width), there will be two distinct returns.

As the targets get closer together, the returns begin to merge such that it is difficult to tell whether there is one target or two, since thermal noise tends to distort the combination. The presence of a dip between them yielding two peaks will depend on the relative phases of the two pulses.

Typical resolution algorithms include the classical inflection or dip approach, as well as template matching algorithms that look for differences compared to the known response of a single point target. Multipath and thermal noise will affect the probability of correctly resolving two targets in range when two targets are present as well as the probability of false splits (i.e., claiming that two targets are present when only one is actually present).

Similar algorithms are used for resolution in angle as the beam scans past the target. If frequency diversity is used on different prfs, the amplitude will fluctuate as the beam scans past the target, making it even more difficult to determine whether one target or two are present.

With a monopulse radar, one can examine the imaginary part of the complex monopulse ratio to determine if more than one target is present. A single target creates a quadrature value of zero, and multiple targets can create a nonzero value.

APPLICATIONS OF RADIO DETECTION AND RANGING (RADAR)


Radars can be classified by frequency band, use, or platform, for example, ground based, shipborne, airborne, or spaceborne. Radars generally operate in the microwave regime, although HF over-the-horizon (OTH) radars such as JINDALEE, OTHB, and ROTHR use similar principles, bouncing signals from the ionosphere to achieve long-range coverage.

Radars are often denoted by the letter band of operation, for example, L-band (1–2 GHz), S-band (2–4 GHz), C-band (4–8 GHz), and X-band (8–12 GHz). Some classifications of radar are based on propagation mode (e.g., monostatic, bistatic, OTH, underground) or on scan method (mechanical, electronic, multibeam).
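A trivial lookup that maps a carrier frequency to the letter bands quoted above; only the four bands named in the text are included, and the helper itself is our own illustration:

```python
# Band edges in Hz, per the letter designations quoted above
BANDS = [("L", 1e9, 2e9), ("S", 2e9, 4e9), ("C", 4e9, 8e9), ("X", 8e9, 12e9)]

def band_letter(freq_hz):
    """Return the letter band containing freq_hz, or None if outside."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return None
```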

Other classifications of radar are based on the waveform and processing, for example, pulse Doppler (PD), continuous wave (CW), FM/CW, synthetic aperture radar (SAR) or impulse (wideband video).

Radars are often classified by their use: weather radar, police speed detection, navigation, precision approach radar, airport surveillance and air route surveillance, radio astronomy, fire control and weapon direction, terrain mapping and avoidance, missile fuzing, missile seeker, foliage penetration, subsurface or ground penetrating, acquisition, orbital debris, range instrumentation, imaging (e.g., SAR/ISAR), etc.

Search (or surveillance) radars are concerned with detection of targets out to long range and low elevation angles to allow adequate warning on pop-up low-flying targets (e.g., sea skimmers). Since the search radar is more concerned with detection (i.e., presence or absence of targets) and can accommodate cruder accuracy in estimating target parameters such as azimuth angle, elevation angle, and range, search radars tend to have poorer range and angle accuracy than tracking radars.

The frequency tends to be lower than track radars since RF power and antenna aperture are less expensive and frequency stability is better. Broad beams (e.g., fan beam) allow faster search of the volume.

To first order, the radar search performance is driven by the power-aperture product (PA) to search the volume with a given probability of detection (PD) in a specified frame time. PA actually varies slightly in that to maintain a fixed false alarm rate per scan, more beam positions offer more opportunities for false alarms and, hence, the detection threshold must be raised, which increases the power to achieve the specified PD.
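The power-aperture scaling can be sketched with one common form of the search radar equation; the constant factors and loss bookkeeping vary by reference, so treat this as an illustration of the R⁴ and solid-angle scaling, not a design tool:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def search_power_aperture(omega_sr, r_max_m, sigma_m2, t_scan_s,
                          snr_required, t_sys_k=290.0, loss=1.0):
    # P_av * A = 4*pi * k * T_s * L * SNR * Omega * R^4 / (sigma * t_s)
    # (one common textbook form; exact constants vary by reference)
    return (4.0 * math.pi * K_B * t_sys_k * loss * snr_required
            * omega_sr * r_max_m ** 4 / (sigma_m2 * t_scan_s))

pa1 = search_power_aperture(1.0, 100e3, 1.0, 10.0, 20.0)
pa2 = search_power_aperture(1.0, 200e3, 1.0, 10.0, 20.0)  # 2x range -> 16x PA
```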

With a phased-array antenna (i.e., electronically scanned beam), the probability of false alarm can be optimized by setting a high false alarm probability (a lower threshold) in the search beam and using a verify beam with a higher threshold to confirm whether a search detection was an actual target or just a false alarm.

The lower threshold in search allows less search power with some fraction of beams requiring the extra verify beams. The net effect on total required transmit power can be a reduction using this optimization technique.

Search radars tend to use a fan beam or stacked receive beams to reduce the number of beam positions allowing more time in the beam for coherent processing to reduce clutter. Fill pulses are sometimes used to allow good clutter cancellation on second- or higher time-around clutter returns.

Track radars tend to operate at higher frequency and have better accuracy, that is, narrower beams and high range resolution. Simple radars track a single target with an early–late range tracker, Doppler speed gate, and conical scan or sequential lobing. More advanced angle trackers use monopulse or conical scan on receive only (COSRO) to deny inverse modulation by repeater jammers.

The multifunction phased-array radar can be programmed to conduct searches with track beams assigned to individual detected targets. The tracks are maintained in track files. If time occupancy becomes a problem, the track pulses can be machine gunned out at the targets in range order, and on receive they are gathered in one after the other since the track window on each target is quite small.

In mechanically rotated systems, track is often a part of search, for example, track-while-scan (TWS). A plot extractor clusters the primitive returns in range, Doppler, and angle from a given target to produce a single plot.

The plots are associated with the track files using scan-to-scan correlation gates. The number of targets that can be handled in a TWS system is limited by data processing rather than track power.

WHAT IS RADAR? RADIO DETECTION AND RANGING BASIC INFORMATION


Radar is an acronym for radio detection and ranging as these were primary functions during the early use of radar. Radars can also measure other target properties such as range rate (Doppler), angular location, amplitude statistics, and polarization scattering matrix.

In its simplest form, a radar propagates a pulse from an antenna to a target. The target reflects the pulse in many directions, with some of the energy backscattered toward the radar.

The radar return is received by the radar and subjected to processing to allow its detection. Since the pulse travels at approximately the speed of light, the distance to the target can be determined based on the round trip time delay.

Reflections from undesired targets are known as clutter and often include terrain, rain, man-made objects, etc. Usually, the radar will have a narrow beam so that the angular location of the target (i.e., azimuth and elevation) can also be determined by some technique such as locating the centroid of the target returns as the beam scans across the target or by comparing the signals received simultaneously or sequentially by different antenna patterns or overlapped beams.

The radial velocity of the target can be determined by differencing the range measurements. Since the range measurements may not be very accurate, better range rate accuracy can be obtained by coherently measuring the Doppler frequency; that is, phase change from pulse-to-pulse in a given range cell.

At microwave frequencies, the wavelength is quite small and, hence, small changes in range are readily detected. Generally, frequency is measured by using a pulse Doppler filter bank, pulse pair processing, or a CW frequency discriminator. Coherently measuring the frequency is also a good way for filtering moving targets from stationary or slowly moving clutter.

Radar parameters vary with the type of radar. Typically, the radar transmitted pulse width is 1 to 100 μs with a pulse repetition frequency (prf) of 200 Hz to 10 kHz. If the antenna is a mechanically rotated reflector, it generally rotates 360° in azimuth at about 12 to 15 r/min. If the two-way time delay is T, the range to the target is R = 0.5T × c, where c ≈ 3 × 10^8 m/s is the velocity of light.
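The range relation above, together with the unambiguous range implied by the prf, can be sketched in a few lines; the example delay and prf are arbitrary:

```python
C = 3e8  # velocity of light, m/s, as in the text

def range_from_delay(t_s):
    """R = 0.5 * T * c for a two-way time delay T."""
    return 0.5 * t_s * C

def unambiguous_range(prf_hz):
    """Longest range whose echo returns before the next pulse."""
    return C / (2.0 * prf_hz)

r = range_from_delay(1e-3)    # 1 ms round trip -> 150 km
ru = unambiguous_range(1e3)   # 1 kHz prf -> 150 km unambiguous range
```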

VELOCITY TRANSDUCERS BASIC INFORMATION AND TUTORIALS


Signal conditioning techniques make it possible to derive all motion measurements—displacement, velocity, or acceleration—from a measurement of any one of the three. Nevertheless, it is sometimes advantageous to measure velocity directly, particularly in the cases of short-stroke rectilinear motion or high-speed shaft rotation.

The analog transducers frequently used to meet these two requirements are
- Magnet-and-coil velocity transducers
- Tachometer generators

A third category, counter-type velocity transducers, is simple to implement and directly compatible with digital controllers.

The operation of magnet-and-coil velocity transducers is based on Faraday’s law of induction. For a solenoidal coil with a high length-to-diameter ratio made of closely spaced turns of fine wire, the voltage induced into the coil is proportional to the velocity of the magnet.

Magnet-and-coil velocity transducers are available with strokes ranging from less than 10 mm to approximately 0.5 m.

A tachometer generator is, as the name implies, a small AC or DC generator whose output voltage is directly proportional to the angular velocity of its rotor, which is driven by the controlled output shaft. Tachometer generators are available for shaft speeds of 5000 r/min, or greater, but the output may be nonlinear and there may be an unacceptable output voltage ripple at low speeds.

AC tachometer generators are less expensive and easier to maintain than DC tachometer generators, but DC tachometer generators are directly compatible with analog controllers, and the polarity of the output is a direct indication of the direction of rotation.

The output of an AC tachometer generator must be demodulated (i.e., rectified and filtered), and the demodulator must be phase sensitive in order to indicate direction of rotation. Counter-type velocity transducers operate on the principle of counting electrical pulses for a fixed amount of time, then converting the count per unit time to velocity.

Counter-type velocity transducers rely on the use of a proximity sensor (pickup) or an incremental encoder. Proximity sensors may be one of the following types:

- Electro-optic
- Variable reluctance
- Hall effect
- Inductance
- Capacitance

Since a digital controller necessarily includes a very accurate electronic clock, both pulse counting and conversion to velocity can be implemented in software (i.e., made a part of the controller program). Hardware implementation of pulse counting may be necessary if time-intensive counting would divert the controller from other necessary control functions.

A special-purpose IC, known as a quadrature decoder/counter interface, can perform the decoding and counting functions and transmit the count to the controller as a data word.
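The count-per-gate-time conversion described above can be sketched as follows; the encoder resolution and gate time here are illustrative assumptions:

```python
def counts_to_rpm(count, pulses_per_rev, gate_time_s):
    """Convert pulses counted in a fixed gate time to shaft speed."""
    revs = count / pulses_per_rev      # revolutions during the gate
    return revs / gate_time_s * 60.0   # rev/s -> rev/min

# 600 pulses in 0.1 s from a 60-pulse/rev pickup -> 6000 rpm
speed = counts_to_rpm(600, 60, 0.1)
```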

COAXIAL TRANSMISSION LINES SKIN EFFECT BASIC INFORMATION


The components that connect, interface, transfer, and filter RF energy within a given system or between systems are critical elements in the operation of vacuum tube devices. Such hardware, usually passive, determines to a large extent the overall performance of the RF generator.

To optimize the performance of power vacuum devices, it is first necessary to understand and optimize the components upon which the tube depends. The mechanical and electrical characteristics of the transmission line, waveguide, and associated hardware that carry power from a power source (usually a transmitter) to the load (usually an antenna) are critical to proper operation of any RF system.

Mechanical considerations determine the ability of the components to withstand temperature extremes, lightning, rain, and wind, that is, they determine the overall reliability of the system.

The effective resistance offered by a given conductor to radio frequencies is considerably higher than the ohmic resistance measured with direct current. This is because of an action known as the skin effect, which causes the currents to be concentrated in certain parts of the conductor and leaves the remainder of the cross-section to contribute little or nothing toward carrying the applied current.

When a conductor carries an alternating current, a magnetic field is produced that surrounds the wire. This field continually expands and contracts as the ac wave increases from zero to its maximum positive value and back to zero, then through its negative half-cycle.

The changing magnetic lines of force cutting the conductor induce a voltage in the conductor in a direction that tends to retard the normal flow of current in the wire. This effect is more pronounced at the center of the conductor.

Thus, current within the conductor tends to flow more easily toward the surface of the wire. The higher the frequency, the greater the tendency for current to flow at the surface. The depth of current flow d is a function of frequency and is determined from the following equation:

d = 2.6/√(μf)

where d is the depth of current in mils, μ is the permeability (copper=1, steel=300), and f is the frequency of signal in MHz. It can be calculated that at a frequency of 100 kHz, current flow penetrates a conductor by 8 mils.

At 1 MHz, the skin effect causes current to travel in only the top 2.6 mils in copper, and even less in almost all other conductors. Therefore, the series impedance of conductors at high frequencies is significantly higher than at low frequencies.
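The skin-depth formula and the two figures quoted above can be checked directly; note the 2.6 constant is specific to mils and MHz:

```python
import math

def skin_depth_mils(freq_mhz, mu=1.0):
    """d = 2.6 / sqrt(mu * f), d in mils, f in MHz, per the text."""
    return 2.6 / math.sqrt(mu * freq_mhz)

d_100khz = skin_depth_mils(0.1)  # ~8.2 mils at 100 kHz in copper
d_1mhz = skin_depth_mils(1.0)    # 2.6 mils at 1 MHz in copper
```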

When a circuit is operating at high frequencies, the skin effect causes the current to be redistributed over the conductor cross-section in such a way as to make most of the current flow where it is encircled by the smallest number of flux lines. This general principle controls the distribution of current, regardless of the shape of the conductor involved.

With a flat-strip conductor, the current flows primarily along the edges, where it is surrounded by the smallest amount of flux.

GIGABIT ETHERNET MEDIA HANDLING CAPABILITIES AND SUPPORT BASIC INFORMATION


Gigabit Ethernet represents an extension to the 10 Mbps and 100 Mbps IEEE 802.3 Ethernet standards. Providing a data transmission capability of 1000 Mbps, Gigabit Ethernet supports the CSMA/CD access protocol, which makes various types of Ethernet networks scalable from 10 Mbps to 1 Gbps.

Similar to 10BASE-T and Fast Ethernet, Gigabit Ethernet can be used as a shared network, with devices attached to a 1 Gbps repeater hub that provides shared use of the 1 Gbps operating rate, or in a switched configuration, with 1 Gbps ports providing high-speed access to servers while lower-rate ports serve 10 Mbps and 100 Mbps workstations and hubs. Very few organizations, however, can be expected to require a 1 Gbps shared-media network.

Similar to the recognition that Fast Ethernet would be required to operate over different types of media, the IEEE 802.3z committee recognized that Gigabit Ethernet would also be required to operate over multiple types of media.

This recognition resulted in the development of a series of specifications, each designed to accommodate different types of media. Thus, any discussion of Gigabit Ethernet involves an examination of the types of media the technology supports and how it provides this support.

There are five types of media supported by Gigabit Ethernet – single-mode fiber, multi-mode fiber, short runs of coaxial cable or shielded twisted pair, and longer runs of unshielded twisted pair.

The actual relationship of the Gigabit 802.3z reference model to the ISO Reference Model is very similar to that of Fast Ethernet. Instead of a Medium Independent Interface (MII), Gigabit Ethernet uses a Gigabit Media Independent Interface (GMII). The GMII provides the interconnection between the MAC sublayer and the physical layer, including an 8-bit data bus that operates at 125 MHz plus such control signals as transmit and receive clocks, carrier indicators, and error conditions.
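The GMII figures above imply the Gigabit rate directly: an 8-bit bus clocked at 125 MHz moves 10^9 bits per second.

```python
bus_width_bits = 8     # GMII data bus width, per the text
clock_hz = 125e6       # GMII clock, per the text
rate_bps = bus_width_bits * clock_hz  # 1e9 bits/s = 1000 Mbps
```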

BIT ERROR RATE TESTER BASIC INFORMATION AND TUTORIALS


To determine the bit error rate, a device called a bit error rate tester (BERT) is used. Bit error rate testing involves generating a known data sequence into a transmission device and examining the received sequence, at the same device or at a remote device, for errors.

Normally, BERT testing capability is built into another device, such as a ‘sophisticated’ break-out box or a protocol analyzer; however, several vendors manufacture hand-held BERT equipment. Since a BERT generates a sequence of known bits and compares a received bit stream to the transmitted bit stream, it can be used to test both communications equipment and line facilities.
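What a BERT measures can be sketched in a few lines: compare a known transmitted sequence against the received sequence and divide errors by bits sent. The test pattern and injected error below are illustrative:

```python
def bit_error_rate(sent, received):
    """Fraction of received bits that differ from the known sent bits."""
    assert len(sent) == len(received)
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    return errors / len(sent)

sent = [1, 0, 1, 1, 0, 0, 1, 0] * 1000   # known 8000-bit test pattern
received = sent.copy()
received[5] ^= 1                          # inject a single bit error
ber = bit_error_rate(sent, received)      # 1 error in 8000 bits
```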

You would employ a BERT in the same manner to determine the bit error rate on a digital circuit, with the BERT used with CSU/DSUs instead of modems. Placing the modem closest to the terminal into a loop-back mode can be used to test that modem. Since a modem should always correctly modulate and demodulate data, if the result of the BERT shows even one bit in error, the modem is considered to be defective.

If the distant modem is placed into a digital loop-back mode of operation, where its transmitter is tied to its receiver to avoid demodulation and remodulation of data, the line quality in terms of its BER can be determined. This is because the data stream from the BERT is looped back by the distant modem without that modem functioning as a modem.

Since a four-wire leased line consists of two wire pairs, this type of test can be used to determine whether the quality of one pair is better than that of the other. On occasion, due to the engineering of leased lines through telephone company offices and their routing onto microwave facilities, it is quite possible that the two pairs are separated by a considerable bandwidth.

Since some frequencies are more susceptible to atmospheric disturbances than other frequencies, it becomes quite possible to determine that the quality of one pair is better than the other pair. In one situation the author is aware of, an organization that originally could not transmit data on a leased line determined that one wire pair had a low BER while the other pair had a very high BER.

Rather than turn the line over to the communications carrier for repair during the workday, this organization switched their modems from full-duplex to half-duplex operation and continued to use the circuit. Then, after business hours, they turned the circuit over to the communications carrier for analysis and repair.

SCIENTIFIC ATLANTA CABLE MODEM BASIC INFORMATION


In this examination of cable modems, we will focus upon the asymmetric architecture of a Scientific Atlanta cable modem. The Scientific Atlanta cable modem we will examine is based upon an asymmetric design, using QAM in a 6 MHz downstream channel to obtain an operating rate of 27 Mbps.

In the opposite direction the modem uses QPSK modulation to provide an operating rate of 1.5 Mbps upstream. The modem supports downstream frequencies in the 54 to 750 MHz spectrum and frequencies in the 14 to 26.5 MHz range for upstream communications.

The Scientific Atlanta cable modem’s modulation method was proposed to the IEEE 802.14 Working Group and became the basis for use in both the IEEE standard and the DOCSIS specification. Scientific Atlanta noted that QAM is non-proprietary and was previously selected as a European Telecommunications Standard.

In the firm’s proposal, two levels of modulation, based upon 64 QAM and 256 QAM, were defined to permit implementation flexibility. The standardization of QAM for downstream transmission results in a signaling rate of 5 MHz using carrier frequencies between 151 MHz and 749 MHz, spaced 6 MHz apart to correspond to TV channel assignments.

The use of a 5 MHz signaling rate and 64 QAM, which encodes six bits in one signal change, permits a transmission rate of 6 bits/symbol × 5 MHz, or 30 Mbps. In comparison, the use of 256 QAM packs eight bits per signal change, resulting in a transmission rate of 8 bits/signal change × 5 MHz, or 40 Mbps.

Through the use of forward error correction coding, the data throughput is slightly reduced from the modem’s operating rate, to 35.504 Mbps for 256 QAM and 27.37 Mbps for 64 QAM. This reduction results from extra parity bits injected into the data stream to provide the forward error detection and correction capability.
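The downstream rates quoted above follow from bits-per-symbol times signaling rate:

```python
import math

def qam_raw_rate_bps(m, symbol_rate_hz):
    """Raw line rate: log2(M) bits per symbol times the symbol rate."""
    return math.log2(m) * symbol_rate_hz

r64 = qam_raw_rate_bps(64, 5e6)    # 6 bits/symbol x 5 MHz = 30 Mbps
r256 = qam_raw_rate_bps(256, 5e6)  # 8 bits/symbol x 5 MHz = 40 Mbps
```

The FEC parity overhead then trims these raw rates to the 27.37 and 35.504 Mbps throughputs quoted above.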

MICROCOM NETWORKING PROTOCOL (MNP) CLASSES BASIC INFORMATION


Class 1 The lowest performance level. Uses an asynchronous byte-oriented half-duplex method of exchanging data. The protocol efficiency of a Class 1 implementation is about 70% (a 2400 bps modem using MNP Class 1 will have a 1690 bps throughput).

Class 2 Uses asynchronous byte-oriented full-duplex data exchange. The protocol efficiency of a Class 2 modem is about 84% (a 2400 bps modem will realize a 2000 bps throughput).

Class 3 Uses synchronous bit-oriented full-duplex data exchange. This approach is more efficient than the asynchronous byte-oriented approach, which takes 10 bits to represent 8 data bits because of the ‘start’ and ‘stop’ framing bits. The synchronous data format eliminates the need for start and stop bits. Users still send data asynchronously to a Class 3 modem but the modems communicate with each other synchronously. The protocol efficiency of a Class 3 implementation is about 108% (a 2400 bps modem will actually run at a 2600 bps throughput).

Class 4 Adds two techniques: Adaptive Packet Assembly and Data Phase Optimization. In the former technique, if the data channel is relatively error-free, MNP assembles larger data packets to increase throughput. If the data channel is introducing many errors, then MNP assembles smaller data packets for transmission. Although smaller data packets increase protocol overhead, they concurrently decrease the throughput penalty of data retransmissions, so more data are successfully transmitted on the first try.

Data Phase Optimization eliminates some of the administrative information in the data packets, which further reduces protocol overhead. The protocol efficiency of a Class 4 implementation is about 120% (a 2400 bps modem will effectively yield a throughput of 2900 bps).

Class 5 This class adds data compression, which uses a real-time adaptive algorithm to compress data. The real-time capabilities of the algorithm allow the data compression to operate on interactive terminal data as well as on file transfer data. The adaptive nature of the algorithm allows it to analyze user data continuously and adjust the compression parameters to maximize data throughput.

The effectiveness of the data compression algorithm depends on the data pattern being processed. Most data patterns will benefit from data compression, with performance advantages typically ranging from 1.3:1 to 2.0:1, although some files may be compressed at an even higher ratio. Based on a 1.6:1 compression ratio, Microcom gives Class 5 MNP a 200% protocol efficiency, or 4800 bps throughput in a 2400 bps modem installation.

Class 6 This class adds 9600 bps V.29 modulation, universal link negotiation, and statistical duplexing to MNP Class 5 features. Universal link negotiation allows two unlike MNP Class 6 modems to find the highest operating speed (between 300 and 9600 bps) at which both can operate. The modems begin to talk at a common lower speed and automatically negotiate the use of progressively higher speeds.

Statistical duplexing is a technique for simulating full-duplex service over half-duplex, high-speed carriers. Once the modem link has been established using full-duplex V.22 modulation, user data streams move via the carrier’s faster half-duplex mode. However, the modems monitor the data streams and allocate each modem’s use of the line to best approximate a full-duplex exchange. Microcom claims that a 9600 bps V.29 modem using MNP Class 6 (and Class 5 data compression) can achieve 19.2 kbps throughput over dial circuits.

Class 7 Uses an advanced form of Huffman encoding called Enhanced Data Compression. Enhanced Data Compression has all the characteristics of Class 5 compression, but in addition predicts the probability of repetitive characters in the data stream. Class 7 compression, on the average, reduces data by 42%.

Class 8 Adds CCITT V.29 Fast-Train modem technology to Class 7 Enhanced Data Compression, enabling half-duplex devices to emulate full-duplex transmission.

Class 9 Combines CCITT V.32 modem modulation technology with Class 7 Enhanced Data Compression, resulting in a full-duplex throughput that can exceed that obtainable with a V.32 modem by 300%. Class 9 also employs selective retransmission, in which only packets containing errors are retransmitted, and piggybacking, in which acknowledgment information is added to the data.

Class 10 Adds Adverse Channel Enhancement (ACE), which optimizes modem performance in environments with poor or varying line conditions, such as cellular communications, rural telephone service, and some international connections.

Adverse Channel Enhancements fall into five categories:

Negotiated Speed Upshift: modem handshake begins at the lowest possible modulation speed, and when line conditions permit, the modem upshifts to the highest possible speed.

Robust Auto-Reliable Mode: enables MNP10 modems to establish a reliable connection during noisy call set-ups by making multiple attempts to overcome circuit interference. In comparison, other MNP classes make only one call set-up attempt.

Dynamic Speed Shift: causes an MNP10 modem to adjust its operating rate continuously throughout a session in response to current line conditions.

Aggressive Adaptive Packet Assembly: results in packet sizes varying from 8 to 256 bytes in length. Small data packets are used during the establishment of a link, and there is an aggressive increase in the size of packets as conditions permit.

Dynamic Transmit Level Adjustment (DTLA): designed for cellular operations, DTLA results in the sampling of the modem’s transmit level and its automatic adjustment to optimize data throughput.
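The throughput figures quoted for the classes above are simple products of line rate, protocol efficiency, and (from Class 5 on) compression ratio; note that 1.2 × 1.6 ≈ 1.9, close to the 200% figure Microcom quotes for Class 5. The helper below is our own illustration, not part of MNP:

```python
def throughput_bps(line_rate_bps, efficiency, compression=1.0):
    """Effective throughput = line rate x protocol efficiency x compression."""
    return line_rate_bps * efficiency * compression

class4 = throughput_bps(2400, 1.20)        # ~2900 bps quoted for Class 4
class5 = throughput_bps(2400, 1.20, 1.6)   # ~4800 bps quoted for Class 5
```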

AMPLITUDE MODULATED RADIO-FREQUENCY BANDS CLASSIFICATION


Amplitude modulated radio frequencies are grouped into three bands according to the wavelength of their carrier frequencies. The carrier frequency chosen depends to a large extent on the distance between the broadcasting station and the target listeners.

1. Long wave (low frequency).
All transmissions whose carrier frequencies are less than 400 kHz are generally classified as long wave. At a frequency of 100 kHz, a quarter-wavelength antenna is 750 meters high.

Such an antenna poses several problems, such as vulnerability to high winds and danger to low-flying aircraft. Long wave broadcasting stations therefore use an electromagnetically short antenna, which necessarily limits their reach to a few tens of kilometers because the short antenna radiates only a ground wave.

2. Medium wave.
Carrier frequencies in the range 300 kHz to 3 MHz are regarded as medium wave. The height of the antenna becomes more manageable and the possibility of using the sky wave to reach distant audiences is a reality. Generally, it is used for local area broadcasting.

3. Short wave.
Short wave generally refers to carrier frequencies between 3 MHz and 30 MHz. The wavelengths under consideration are between 100 meters and 10 meters. Antenna structures can be constructed to give specified directional properties.

Most of the energy can be put into the sky wave and the signal can be bounced off the ionosphere (the layer of ionized gas that surrounds the Earth) to reach receivers halfway round the world. A very severe problem is encountered in short wave transmission, that is, the signal tends to fade from time to time.

This phenomenon is caused by the multiple paths by which the signal can reach the receiver. It is clear that if two signals reach the receiver by different paths such that their phase angles are 180 degrees apart, they will cancel each other.

The ionosphere sometimes experiences severe turbulence due mainly to radiation from the Sun. Short wave transmission is therefore at its best during the hours of darkness.
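The 750-meter figure quoted for long wave follows directly from the quarter-wavelength relation:

```python
C = 3e8  # speed of light, m/s

def quarter_wave_height_m(freq_hz):
    """Quarter-wavelength antenna height: (c / f) / 4."""
    return C / freq_hz / 4.0

h = quarter_wave_height_m(100e3)  # 750 m at 100 kHz, as stated above
```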

CLOUD COMPUTING STRATEGIC BUSINESS AND FINANCIAL IMPLICATIONS


The challenging economy made the cloud computing conversation especially relevant. The business and financial potential of cloud makes it a special trend for us to embrace. We will delve deeper into the full range of business and financial benefits later. The strategic business and financial implications of cloud are the focus of this article.

First and foremost, cloud computing gives us another avenue for realizing business agility, the Holy Grail of all business strategies. Business agility is probably the most frequently mentioned goal of business and technology executives when they describe their strategies, and yet it remains the least realized in execution.

We could even go so far as to say that a business or technology strategy that is clearly articulated, can deliver on that promise, and has been incorporated into daily operations can seem as elusive as any mythological beast. Fortunately, this opportunity truly is different.

Cloud computing offers business agility in a simple, clearly understandable model. For a new startup, or for the emergent business requirements of an established enterprise, cloud computing enables a rapid time-to-market model: the organization securely accesses a ready-to-use IT infrastructure environment, hosted and managed by a trusted third party, with right-sized, scalable computing, network, and storage capability, paid for only as it is used and based on how much is used. Hmmm, let me think about this a while . . . NOT!!!

We do not have to build or expand our data center (no construction of buildings, raised floor, energy and cooling equipment, or building automation and monitoring equipment, and no staff). We do not have to buy any hardware, software, or network infrastructure (no procurement hassles of the kind we are so accustomed to, especially the inevitable delays in IT acquisition). We can implement a new business model, or start a new company to address a new market need, far faster than we normally could. And we do not have to continue paying for the cloud infrastructure and resources if we discontinue the project or the company fails.

From a business and IT executive’s perspective, what is not to like about this business vignette?

Countless new startup firms have leveraged cloud computing models to obtain their IT infrastructure as a service, thereby enabling them to focus their limited funds and resources on their unique technology and business-model innovation.

Resource constraints are liberating in this sense, since they force new startups to leverage ready-to-use cloud resources instead of building a data center. These scenarios, of course, raise a number of business and financial implications that must be explored further.

DOUBLE CONVERSION UPS SYSTEM BASIC INFORMATION

0 comments

Double-conversion systems are characterized by their topology. In these systems, the incoming ac line is first converted to dc. The dc then provides input power to a dc-to-ac converter (i.e., an inverter). The inverter output is ac, which is used to power the critical load.

Many different types of inverters are used, each employing a variant of available technology. (Note that the recently revised NEMA PE 1-1993 [B23] identifies the double-conversion system as a “rectifier inverter.”)

Historically, the double-conversion UPS has been the most prominent in the industry. It has been available for many years and has proven reliable when operated within its design limits.

This type of system is the static electrical equivalent to the motor-generator set. The battery is connected in parallel with the dc input to the inverter, and provides continuous power to it any time the incoming line is outside of its specification or fails.

Switching to the battery is automatic, with no break in either the input to the inverter or the output from it.

The double-conversion system has several advantages:

— It provides excellent frequency stability.
— There is a high degree of isolation from variations in incoming line voltage and frequency.
— A zero transfer time is possible.
— Operation is relatively quiet.
— Some systems can provide a sinusoidal output waveform with low distortion.

In the lower power UPS applications (0.1–20 kW), the double-conversion UPS has the following disadvantages. (Many of these disadvantages can be minimized if the system is carefully specified to use the latest topologies.)

— There is lower overall efficiency.
— A large dc power supply is required (typically 1.5 times the full-load rating of the UPS).
— Noise isolation line to load can be poor.
— There is greater heat dissipation, which may affect the service life of the UPS.

In addition, if the inverter is the pulse width modulated type, the high-frequency circuitry may produce electromagnetic interference (EMI). This may require special filtering and shielding to protect sensitive equipment from radiated and conducted interference.

The double-conversion UPS may also produce excessive battery ripple current, possibly resulting in reduced battery life (see IEEE Std 1184-1994).
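As a rough illustration of two of the disadvantages quoted above (the 1.5× dc supply rule of thumb and the extra heat dissipation), consider the following sketch. The 85 percent overall efficiency is an illustrative assumption, not a figure from this article:

```python
# Rough sizing sketch for a double-conversion UPS. The 1.5x factor is the
# rule of thumb quoted above; the 0.85 overall efficiency is an assumed,
# illustrative value.
def dc_supply_rating_kw(load_kw, factor=1.5):
    """dc power supply rating implied by the UPS full-load rating."""
    return load_kw * factor

def heat_dissipated_kw(load_kw, efficiency=0.85):
    """Heat rejected by the UPS at a given load and assumed overall efficiency."""
    input_kw = load_kw / efficiency
    return input_kw - load_kw

print(dc_supply_rating_kw(10))           # 15.0 kW dc supply for a 10 kW UPS
print(round(heat_dissipated_kw(10), 2))  # about 1.76 kW lost as heat
```

Lower efficiency shows up directly as heat, which is why the service-life concern above tracks the efficiency disadvantage.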

INFRARED TRANSDUCERS BASIC INFORMATION AND TUTORIALS

0 comments

Many wireless devices transmit and receive energy at infrared (IR) wavelengths, rather than at radio wavelengths. Infrared energy has a frequency higher than that of radio waves, but lower than that of visible light.

Infrared is sometimes called heat radiation, but this is a misnomer. Some wireless devices transmit and receive their signals in the visible-light range, although these are encountered much less often than IR devices.

The most common IR transmitting transducer is the infrared-emitting diode (IRED). A fluctuating direct current is applied to the IRED. The current causes the device to emit IR rays; the fluctuations in the current constitute the modulation, and produce rapid variations in the intensity of the rays emitted by the semiconductor junction.

The modulation contains information, such as which channel your television set should seek, or whether the volume is to be raised or lowered. Infrared energy is not visible, but at some wavelengths it can be focused by ordinary optical lenses and reflected by ordinary optical mirrors.
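The idea of carrying a command in the current fluctuations can be sketched with a toy on-off keying scheme. The bit patterns and current levels below are invented for illustration and do not correspond to any real remote control protocol:

```python
# Toy on-off keying sketch: each command is a made-up bit pattern, and each
# bit sets the IRED drive current (hence the emitted IR intensity) high or low.
COMMANDS = {"volume_up": "10110", "volume_down": "10001"}

def modulate(command, high_ma=100, low_ma=0):
    """Turn a command into a sequence of drive-current levels (mA)."""
    return [high_ma if bit == "1" else low_ma for bit in COMMANDS[command]]

def demodulate(currents, threshold_ma=50):
    """Recover the command from received current levels."""
    bits = "".join("1" if c > threshold_ma else "0" for c in currents)
    return next(cmd for cmd, pattern in COMMANDS.items() if pattern == bits)

signal = modulate("volume_up")
print(signal)              # [100, 0, 100, 100, 0]
print(demodulate(signal))  # volume_up
```

A real IR remote also rides these bits on a high-frequency carrier, but the principle of information in intensity fluctuations is the same.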

This makes it possible to collimate IR rays (make them essentially parallel) so they can be transmitted for distances up to several hundred feet. Infrared receiving transducers resemble photodiodes or photovoltaic cells.

The only real difference is that the diodes are maximally sensitive in the IR, rather than in the visible, part of the electromagnetic spectrum. The fluctuating IR energy from the transmitter strikes the P/N junction of the receiving diode.

If the receiving device is a photodiode, a bias voltage is applied to it, and the current through it varies rapidly in accordance with the signal waveform on the IR beam from the transmitter. If the receiving device is a photovoltaic cell, it produces the fluctuating current all by itself, without the need for an external power supply.

In either case, the current fluctuations are weak, and must be amplified before they are delivered to whatever equipment (television set, garage door, oven, security system, etc.) is controlled by the wireless system.

Infrared wireless devices work best on a line of sight, that is, when the transmitting and receiving transducers are located so the rays can travel without encountering any obstructions. You have probably noticed this when using television remote control boxes, most of which work at IR wavelengths.

Sometimes enough energy will bounce off the walls or ceiling of a room to let you change the channel when the remote box is not on a direct line of sight with the television set. But the best range is obtained by making sure you and the television set can “see” each other.

You cannot put an IR control box in your pants pocket and expect it to work. Radio and IR control boxes are often mistaken for one another because they look alike to the casual observer.

RADIO FREQUENCY TRANSDUCERS BASIC INFORMATION

0 comments

The term radio-frequency (RF) transducer is a fancy name for an antenna. Antennas are so common that you probably don’t think about them very often. Your car radio has one.

Your portable headphone radio, which you might use while jogging on a track (but never in traffic), employs one. Cellular and cordless telephones, portable television receivers, and handheld radio transceivers use antennas.

Hundreds of books have been written on the subject. There are two basic types of RF transducer: the receiving antenna and the transmitting antenna.

A receiving antenna converts electromagnetic (EM) fields, in the RF range from about 9 kHz to several hundred gigahertz, into ac signals that are amplified by the receiving apparatus. A transmitting antenna converts powerful alternating currents into EM fields, which propagate through space.

There are a few significant differences between receiving antennas and transmitting antennas designed for a specific radio frequency. The efficiency of an antenna is important in transmitting applications, but not so important in reception.

Efficiency is the percentage of the power going into a transducer that is converted into the desired form. If the input power to a transducer is Pin watts and the output power is Pout watts, the efficiency in percent, Eff%, can be found using the following equation:
Eff% = 100 × Pout/Pin


In a transmitting antenna, 75 W of RF power are delivered to the transducer, and 62 W are radiated as an EM field. What is the efficiency of the transducer?

To solve this problem, plug the numbers into the formula. In this case, Pin = 75 and Pout = 62. Therefore, Eff% = 100 × 62/75 ≈ 100 × 0.83 = 83 percent.
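The same calculation, expressed as a small function:

```python
def efficiency_percent(p_out, p_in):
    """Percentage of the input power converted into the desired output form."""
    return 100 * p_out / p_in

print(round(efficiency_percent(62, 75)))  # 83, matching the worked example
```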

Another difference between transmitting and receiving antennas is the fact that, for any given frequency, transmitting antennas are often larger than receiving antennas. Transmitting antennas are also more critical as to their location.

Whereas a small loop or whip antenna might work well indoors in a portable radio receiver for the frequency-modulation (FM) broadcast band, the same antenna would not function well at the broadcasting station for use with the transmitter.

Still another difference between transmitting and receiving antennas involves power-handling capability. Obviously, very little power strikes the antenna in a wireless receiver; it can be measured in fractions of a microwatt.

However, a transmitter might produce kilowatts or even megawatts of output power. A small loop antenna, for example, would get hot if it were supplied with 1 kW of RF power; if it were forced to deal with 100 kW, it would probably melt.
