
In the remainder of this section we discuss the characteristics of these traffic types, and also their quality of service (QoS) requirements.

Elastic Traffic

Consider a data file, residing on the disk of the server E (shown in Figure 3.1), that needs to be transferred to the disk of a portable computer attached to the Internet (e.g., the laptop D, which connects via a WLAN), or to the memory of the cell phone C. Although the human (or machine application) that wishes to achieve this file transfer would like to have it completed in, say, a second or two, the source of data itself does not demand any specific transfer rate. Provided no data are lost, the transfer can proceed at any positive rate, however fast or slow, and the file will sooner or later reach the destination device. We say that, from the point of view of the network, this source of traffic is elastic. Many store-and-forward services (with the exception of media streaming services) are elastic; e.g., file transfer, WWW download, electronic mail (e-mail). In this list, the first two are distinguished by the fact that they are nondeferable (i.e., the network should initiate the transfer immediately), whereas e-mail is deferable.

We observe that elastic traffic does not have an intrinsic temporal behavior, and can be transported at arbitrary transfer rates. Thus the following are the QoS requirements of elastic traffic.

Transfer delay and delay variability can be tolerated. An elastic transfer can be performed over a wide range of transfer rates, and the rate can even vary over the duration of the transfer.

The application cannot tolerate data loss. This does not mean, however, that the network cannot lose any data. Packets can be lost in the network (owing to uncorrectable transmission errors or buffer overflows) provided that the lost packets are recovered by an automatic retransmission procedure. Thus effectively the application would see a lossless transport service. Since elastic sources do not require delay guarantees, the delay involved in recovering lost packets can be tolerated.

In practice, of course, users will not tolerate arbitrarily poor throughput, high throughput variability, and large delays. Hence a network carrying elastic traffic will need to manage its resource-sharing mechanisms in a way such that some minimum level of throughput is provided. Further, some sort of fairness must also be ensured between the ongoing elastic transfers.

Elastic traffic can also be carried over circuit multiplexed networks (e.g., the PSTN or GSM cellular networks), or over networks that allocate a fixed rate to the elastic connection (e.g., a second generation CDMA cellular network). In this case, shaping of the traffic so as to match the allocated rate should be carried out by the source. Obvious examples would be Internet access over a dial-up line in the PSTN, or a fixed rate connection over a cellular access network being used for Internet access.

Real-Time Stream Traffic

Consider digitized speech emanating from an end-device involved in interactive telephony. This could be a periodic stream of bytes or packets or, if silence suppression is employed, an on-off stream of bytes or packets. Obviously, this source of traffic has an intrinsic temporal behavior, and this pattern needs to be preserved for faithful reproduction of the speech at the receiver. The network will introduce delay: fixed propagation delay and, in packet networks, queuing delay that can vary from packet to packet (see Figure 3.2). The playout delay introduced at the receiver (to mitigate the effect of random packet delay variation) must be larger the more variable the packet delay. Hence, the network cannot serve such a source at arbitrary rates, as it could in the case of elastic traffic. In fact, depending on the adaptability of such a real-time stream source, the network may need to reserve bandwidth and buffers in order to provide an adequate transport service to the source. Applications such as real-time interactive speech or video telephony are examples of real-time stream sources.


Figure 3.2. A sequence of packets from a voice talk-spurt being transported across a packet network, and then being played out at the receiver after a playout delay. Each source packet contains h seconds of voice, the packet delays are X1, X2, …, the packets arrive at the receiver at times t1, t2, …, and the playout delay is b. Notice that immediate playout at time t1 would have resulted in the talk-spurt being broken.
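To make the figure's timing concrete, here is a minimal sketch (in Python, with invented values for h, b, and the delays; none of these numbers are from the text) of the receiver-side schedule: packet i is generated at time (i − 1)h, arrives at ti = (i − 1)h + Xi, and is played at t1 + b + (i − 1)h; a packet that arrives after its scheduled playout instant is effectively lost.

```python
# Sketch of the playout timing in Figure 3.2 (illustrative numbers, not from the text).
h = 0.020                                    # seconds of voice per packet
b = 0.060                                    # receiver playout delay
X = [0.050, 0.085, 0.040, 0.130, 0.070]      # hypothetical network delays X1, X2, ...

arrivals = [i * h + X[i] for i in range(len(X))]             # packet i is emitted at i*h
playouts = [arrivals[0] + b + i * h for i in range(len(X))]  # played at t1 + b + i*h

for i, (t_arr, t_play) in enumerate(zip(arrivals, playouts), start=1):
    status = "late -> lost" if t_arr > t_play else "on time"
    print(f"packet {i}: arrives {t_arr:.3f} s, playout {t_play:.3f} s ({status})")
```

Setting b = 0 (immediate playout at t1) makes several subsequent packets late, which is exactly the broken talk-spurt the caption warns about.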

The following are the typical QoS requirements of real-time stream sources.

Delay (average and variation) needs to be controlled. Real-time interactive traffic such as that from packet telephony requires tight control of source-to-sink delay; for example, for wide area packet telephony the delay may need to be held below 200 ms with probability greater than 0.99. Packets that do not conform to the delay bound are considered to be lost.

There is tolerance to data loss. Note that, from the point of view of the receiver, packets can be lost for two reasons: (1) buffer overflows, or unrecovered link losses in the network, or (2) late arrivals at the receiver. Owing to the high levels of redundancy in speech and images, a certain amount of data loss is imperceptible. As an example, for packet voice in which each packet carries 20 ms of speech, and the receiver does lost-packet interpolation, 5 to 10% of the packets can be lost without significant degradation of the speech quality [81], [54]. Because of the delay constraints, the acceptable data loss target cannot be achieved by first losing and then recovering the lost packets; in other words, stream traffic expects the intrinsic loss rate from the packet transport service to be bounded.

Store-and-Forward Stream Traffic

We can distinguish what we have just described as real-time stream traffic from the kind of traffic that is generated by applications such as streaming audio and video. Such applications basically involve a one-way transfer of an audio or video file stored on the disk of a media server. Consider a video stored in a server being played over a network. For example, the computer D or the handheld device C in Figure 3.1 may be used to watch a movie stored in the Server E. In order for the received video to be useful, the playout device should be continuously “fed” with video frames so that it is able to reproduce a smooth video output. This can be achieved by providing a guaranteed rate to the transfer, as would be done, for example, in a CDMA cellular system in the context of the device C. Alternatively, owing to the fact that the transfer is one way, a more economical way is to treat the transfer as elastic, and buffer the video frames as they are received. This would be the approach taken when the video is transferred over the random access WLAN to the computer D. Playout is initiated only after a sufficient number of video frames has been buffered so that a smooth video playout can be achieved in spite of a variable transfer rate across the contention-based WLAN. Note that the same description holds for streaming audio.

Thus, the problem of transporting streaming audio or video becomes just another case of transferring elastic traffic, with appropriate receiver adaptation. Note, however, that the elasticity here is constrained since the average rate at which the network transports the video bit stream must match the rate at which the video has been coded. Simple interactivity, such as the ability to rewind, can also be supported by the receiver storing frames that have already been played out. This, of course, puts a burden on the amount of storage that the playout device needs to have. An alternative is to trade off sophistication at the receiver with the possibility of interactivity across the network; that is, the press of the rewind button stops the video playout, frames stored in the playout device are used to create a rewind effect, and meanwhile additional past frames are fetched from the server. But this would need some delay and throughput guarantees from the network, requiring a service model somewhere in between the elastic and the real-time stream model that we have described earlier. We conclude that the QoS requirements of a store-and-forward stream transfer would be the following:

The average transfer rate provided in the network should match (in fact, should be greater than) the average rate at which the stored media has been encoded.

The transfer rate variability should not be too large.

Thus store-and-forward stream traffic is like stream traffic in that it has an intrinsic average rate at which it must be transported, but it does not have strict delay bounds, and hence the network can provide it with a time-varying transfer rate. In fact, TCP can be used to transport store-and-forward streaming media, provided the average TCP throughput does not drop below the average coded rate of the media. The added benefit of TCP is that it recovers lost packets.
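As a rough illustration, the sketch below (Python; all the numbers are hypothetical) checks whether a time-varying transfer rate can sustain playout of media coded at rate r once an initial startup buffer has accumulated: playout stalls whenever the bits delivered so far fall below the bits consumed so far.

```python
# Minimal sketch: store-and-forward streaming over a variable-rate (elastic) transfer.
# All numbers are hypothetical; rates in bits per second, one-second time steps.
r = 1_000_000                  # media coding rate
rates = [1.4e6, 0.6e6, 1.2e6, 0.5e6, 1.5e6, 1.1e6]  # per-second transfer rates
startup = 1.0e6                # bits buffered before playout starts

delivered = 0.0
consumed = 0.0
playing = False
for t, rate in enumerate(rates):
    delivered += rate
    if not playing and delivered >= startup:
        playing = True         # enough frames buffered; start playout
    if playing:
        consumed += r
        if delivered < consumed:
            print(f"t={t+1}s: playout stalls (buffer underflow)")
print(f"mean transfer rate {sum(rates)/len(rates):.2e} b/s vs coded rate {r:.2e} b/s")
```

Here the mean transfer rate (1.05 Mbit/s) exceeds the coded rate, yet a stall still occurs during the low-rate stretch, which is why the variability requirement above accompanies the average-rate requirement.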

Closed and Open Loop Traffic

It is appropriate to refer to real-time stream traffic as open loop, as it has an intrinsic temporal behavior. Typically, the rate of flow on a connection is determined by the application, and these sources are not controlled by the network. In some systems, a limited amount of controllability is possible: the sink alerts the source of poor playout quality, to which the source can respond by using a lower bit rate coder. On the other hand, closed-loop controls are invariably used when transporting elastic traffic, and hence such traffic can be called closed loop. By means of implicit feedback (packet loss) or explicit feedback (control bits in packet headers), the source of the traffic is made to continually adjust its rate of emitting data.
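TCP's additive-increase/multiplicative-decrease (AIMD) adaptation is the canonical example of such a closed-loop control. The following is a minimal sketch (Python; the parameters and the loss rule are illustrative simplifications, not TCP's actual mechanism) in which implicit feedback, a loss indication, makes the source cut its rate multiplicatively, while loss-free rounds let it increase additively.

```python
# Minimal AIMD sketch of a closed-loop elastic source (illustrative, not full TCP).
rate = 1.0                     # current sending rate (arbitrary units)
alpha, beta = 0.5, 0.5         # additive increase step, multiplicative decrease factor
capacity = 10.0                # bottleneck rate; exceeding it triggers a loss signal

for rnd in range(1, 31):
    loss = rate > capacity     # implicit feedback: packet loss when the link is overrun
    if loss:
        rate *= beta           # multiplicative decrease on congestion
    else:
        rate += alpha          # additive increase while the network absorbs the traffic
    print(f"round {rnd:2d}: rate {rate:5.2f}{'  <- loss' if loss else ''}")
```

The resulting sawtooth around the bottleneck rate is the familiar signature of AIMD congestion control.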


Adaptive Bandwidth Sharing for Elastic Traffic

Anurag Kumar, ... Joy Kuri, in Communication Networking, 2004

7.9 Notes on the Literature

There is a vast amount of literature on protocols, performance analysis, optimization, and distributed algorithms for elastic traffic in packet networks.

The idea of using rate-based control (Section 7.4) for elastic traffic was pursued mostly in the context of the ABR service in ATM networks. The ATM standards for ABR traffic management are provided in [17]. The idea of using a single bit to feed congestion information back to the sources was used in the DECbit protocol in the mid-1980s and was analyzed by Chiu and Jain in [61]. This article appears to be one of the first efforts to formally study a congestion control protocol. Congestion-based one-bit feedback evolved through several improvements. Surveys have been provided by Jain [151], Ohsaki et al. [227], and Fendick [100]. Specific algorithms have been proposed and studied by Newman [223] and by Siu and Tzeng [262]. A delayed differential equation model for queue-length-based source control was proposed by Bolot and Shankar [37]. The stability analysis of such delay differential equations has been provided by Elwalid in [90].

To obtain more precise control over the source rates, attention has moved to explicit-rate-based algorithms. The effort here was to develop distributed asynchronous algorithms that could compute the fair rates and feed them back to the sources. The notion of fairness favored in this research was max-min fairness. The basic principles of max-min fairness have been discussed in Bertsekas and Gallager's textbook [33]. The early efforts in developing algorithms attempted to solve the problem by implementing the centralized algorithm in a distributed manner; see Charny et al. [55] and Charny and Ramakrishnan [54]. An algorithm that virtually became the benchmark in the ATM Forum was developed by Kalyanaraman et al. [155]. A related approach was developed by Fulton et al. [112]; the algorithm in Lee and De Veciana [193] is similar, but the authors also examine delays and asynchrony. Subsequent approaches were based on control theoretic formulations, such as those reported in Benmohamed and Meerkov [27] and in Kolarov and Ramamurthy [178]. The solution-of-equation technique discussed in Section 7.7 was developed by Abraham and Kumar [3], and the utility optimization approach was developed by Lapsley and Low [190]. In the latter two cases, the resulting distributed algorithms are shown to be equivalent. Abraham and Kumar's approach for max-min fairness also handles minimum session rate guarantees (see [1] and [2]).

Window protocols have long been used to control the flow of traffic in packet networks. The adaptive window algorithm of TCP was developed primarily with the aim of managing congestion in the Internet. The 1990s saw an explosion in the deployment of the Internet, and in the past decade a huge amount of research has been done on the modeling, analysis, and optimization of the TCP protocol. See the book by Stevens ([273]) for a comprehensive and detailed description of TCP. One of the earliest attempts to formally model and analyze TCP was in Mishra et al. [213], who observed the cyclical evolution of the adaptive window-based mechanism in TCP. Lakshman and Madhow [187] developed this idea to obtain a throughput analysis for long file transfers. Subsequently this approach was used by Kumar [181] and by Kumar and Holtzman [182] to analyze the performance of some versions of TCP over lossy and fading wireless links. A very general approach to modeling TCP performance in the presence of a stationary and ergodic loss process is provided by Altman et al. in [10]. The analysis provided in Section 7.6.5 is a simplification of the analysis in [230] by Padhye et al.; this is famous as the PFTK model, after the names of the four authors. An analysis approach that does not rely on this cyclical behavior has been developed by Sharma and Purkayastha [260]; they incorporate an exogenous open-loop source (such as UDP traffic) and different RTPDs but need to assume that the bottleneck link is fully utilized. Another technique for analyzing TCP connections with different RTPDs is provided by Brown [44].

When the transfers are short it is important to incorporate the effects of connection setup and slow-start. Such analyses have been provided by Cardwell et al. [48] and by Barakat and Altman [21].

The use of random discard to control queues in the Internet and hence to manage TCP performance was proposed as RED by Floyd and Jacobson in [103]. The ECN protocol was proposed by Ramakrishnan and Floyd in [241]. The analysis of TCP connections over a wide area bottleneck link with random drop (see Section 7.6.6) was done by Firoiu and Borden in [101].

The modeling of TCP via delay differential equations led to new insights into the effects of the various parameters on stability. In the context of TCP, the differential equation model appears to have been first used by Hollot et al. in [142]. Subsequently these models have been extended to cover more complex scenarios than only one traffic class and a single link; see [214], [215], and [50].

The performance analysis of TCP-like protocols with randomly arriving, finite-volume transfers has been done mainly for single-link scenarios. Based on extensive measurements, Floyd and Paxson have reported in [104] that the Poisson process is a good model for the arrival instants of user sessions. In [138], Heyman et al. use a processor-sharing model and conclude that the performance would be insensitive to file size distributions. In [31] Berger and Kogan use a similar model but with a finite number of users cycling between sharing the bandwidth and thinking. A processor-sharing model is used by Ben Fredj et al. in [108], but a considerable amount of detail in the user behavior is modeled by using the theory of quasi-reversible queues. The extension of the processor-sharing model was developed by Kherani and Kumar [170]. The observation that when propagation delays are large, the performance becomes insensitive to load for a large range of loads was also made in this paper, and was studied through extensive simulations in [291].

The observation that traffic in the Internet is long-range-dependent was made by Leland et al. in [195]; further analysis of the data was presented by Willinger et al. in [299]. That the LRD nature of the traffic could be traced to heavy-tailed file transfers was observed by Crovella and Bestavros in [69]. A study of the effect of queueing with LRD traffic has been reported by Erramilli et al. in [94]. The result that an alternating renewal process with heavy-tailed on times is long-range-dependent was proved by Heath et al. in [137]; the argument mentioned in the text was provided by Kumar et al. in [184]. The importance of modeling buffers in the closed loop has been shown by Kherani and Kumar in [171] and in [172]. In these articles, Kherani and Kumar have also analytically established the long-range dependence of the input process into the router buffer of the bottleneck link.

The approach to thinking about these problems in terms of dynamic sharing of the network links to achieve some global sharing objectives began in the context of the ABR service in ATM networks. Several bandwidth-sharing objectives have been proposed, and their implications have been studied; see Roberts and Massoulie [252], Massoulie and Roberts [206], and Bonald and Roberts [39].

The approach to formulating the bandwidth-sharing problem as a utility maximization problem, and then the use of dual variables (interpreted as “shadow” prices) to develop distributed algorithms, originated in the paper by Kelly et al. [166]. In the discussion in Section 7.7 we cover the work of Abraham and Kumar [3] in the context of the max-min fairness objective, and that of Low and Lapsley [201] for more general fairness objectives. The idea of random exponential marking is discussed by Athuraliya et al. in [16]. In [217], Mo and Walrand have developed the result that max-min fair sharing is a limit of fair sharing with a more general utility function; they have also developed distributed algorithms for adaptive window-based transmission. A detailed study of the bandwidth-sharing fairness achieved by TCP-like adaptive window-based algorithms has been provided by Hurley et al. in [144], and by Vojnovic et al. in [292].

The model of a network with randomly arriving transfer requests of i.i.d. volumes of data has been studied for ideal MMF sharing by De Veciana et al. in [80]; for this model, an approximate performance analysis technique is provided by Chanda et al. in [51]. This model has been generalized by Bonald and Massoulie in [38].

In [45], Bu and Towsley provide a fixed-point approximation for obtaining the throughputs of infinite- and finite-volume TCP transfers in a network; the finite-volume transfers are assumed to traverse a single bottleneck link. Another proposal for the modeling and analysis of TCP-controlled finite-volume file transfers has been given by Casetti and Meo [49]; first a detailed model of a TCP session is analyzed, assuming that the RTT is given, and then a model is proposed for obtaining the RTT in a network carrying randomly arriving file transfers. Other modeling and analysis approaches for a fixed number of TCP transfers in a general network are provided by Firoiu et al. in [102] and by Altman et al. in [11].


Maintenance Decision Support Systems

Diego Galar, Uday Kumar, in eMaintenance, 2017

7.3.4.5.4 Quality of Service

Heterogeneous networks are (by default) multiservice, providing more than one distinct application or service. This implies not only multiple traffic types within the network, but also the ability of a single network to support all applications without compromising QoS (Vera et al., 2009). There are two application classes: throughput- and delay-tolerant elastic traffic (e.g., monitoring weather parameters at low sampling rates) and bandwidth- and delay-sensitive inelastic (real-time) traffic (e.g., noise or traffic monitoring), which can be further differentiated by application data characteristics (e.g., high- vs. low-resolution video) with different QoS requirements. Therefore, a controlled, optimal approach to serving the different network traffic types, each with its own QoS needs, is required (El-Sayed et al., 2008). Providing QoS guarantees in wireless networks is not easy, as wireless segments often constitute “gaps” in the resource guarantee, owing to the limited resource-allocation and management capabilities of shared wireless media.

QoS in Cloud computing is another major research area, which will require more attention as the data and tools become available on Clouds. Dynamic scheduling and resource allocation algorithms based on particle swarm optimization are now being developed, but for high capacity applications and as IoT grows, this could become a bottleneck (Gubbia et al., 2013).


Virtual Path Routing of Elastic Aggregates

Anurag Kumar, ... Joy Kuri, in Communication Networking, 2004

15.1 On-Demand Routing

In practice, prior knowledge of the set of demands may not be available. Consider the example of an Internet service provider (ISP) that has a network deployed and is already providing service to some customers. When the network was being designed (that is, before operations were started), the ISP would probably have used predicted traffic values between different source–destination pairs to design the network. In this design phase, therefore, the complete set of demands to be routed across the network can be assumed to be available. But after the network is in operation, additional traffic demands arrive in an unpredictable manner. For example, a large enterprise may, at very short notice, require bandwidth between two locations. Such a requirement is often expressed by saying that the enterprise needs a virtual path across the ISP's network. The virtual path could correspond to an ATM VP, a permanent virtual circuit (PVC) in a frame relay network, or a tunnel in an MPLS network (MPLS is discussed in Section 15.4).

We consider the problem of routing such virtual paths across a backbone network. We note that the demands that arrive in this scenario are not microflows corresponding to individual elastic traffic sessions (for example, TCP sessions). Rather, a demand is the average offered load of an aggregate of elastic traffic between two locations of an enterprise (as in the earlier example), or between two enterprises that have a business relationship. In either case, the ultimate source and sink of traffic can be identified as corporate entities. In contrast, in Chapter 14, we had hundreds or thousands of traffic end points, not belonging to any single organization, that were grouped into a single aggregate because they accessed the ISP backbone through the same router.

The ISP has no prior knowledge of when demands for virtual paths will arrive, nor of what the demand sizes will be. Thus, demands arrive according to an arbitrary arrival process and must be routed such that the fraction of blocked demands is small and network resources are efficiently utilized. The demand arrival process is fundamentally different from that considered in Chapter 14. In particular, Chapter 14 discusses an offline problem, whereas here we consider an online problem, where routing is done on demand.

An additional constraint that often accompanies such demands is that the whole demand must be routed as a single unit across the ISP's network. In other words, the demand is to be considered unsplittable. The requirement may arise because the enterprise asking for a virtual path across the network is often interested in a virtual private network for its users. This means that the users should have the illusion that the enterprise has its own network resources, meant for their exclusive use. This demands some uniformity in the quality of service experienced by individual sessions within the aggregate; for example, average TCP session throughput should be the same for all sessions constituting the aggregate demand. To ensure this, an ISP would like to treat the whole aggregate as a single unit and route it across its backbone without splitting it. The reason is that splitting might cause different sessions to encounter different round-trip times (RTTs; see Chapter 7) because they can travel on different routes, and that might result in disparities in session performance as perceived by end users.
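To make the on-demand setting concrete, here is a minimal sketch (Python; the topology, capacities, and demand sequence are invented) of a simple greedy online policy: prune links whose residual capacity cannot carry the whole demand, route on a shortest-hop path in what remains, and block the demand if no such path exists. Real systems use more sophisticated admission and routing rules; this only illustrates the unsplittable, online nature of the problem.

```python
from collections import deque

# Hypothetical backbone: residual capacities on undirected links (arbitrary units).
capacity = {("A", "B"): 10, ("B", "C"): 10, ("A", "D"): 6, ("D", "C"): 6}

def residual_graph(demand):
    """Adjacency over links that can still carry the whole (unsplittable) demand."""
    adj = {}
    for (u, v), cap in capacity.items():
        if cap >= demand:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    return adj

def route(src, dst, demand):
    """Greedy online routing: shortest-hop path among links with enough capacity."""
    adj, parent = residual_graph(demand), {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    if dst not in parent:
        return None                      # demand is blocked
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = parent[node]
    path.reverse()
    for u, v in zip(path, path[1:]):     # reserve capacity along the chosen path
        key = (u, v) if (u, v) in capacity else (v, u)
        capacity[key] -= demand
    return path

# Demands arrive online; each must be routed as a single unit or blocked.
for src, dst, vol in [("A", "C", 7), ("A", "C", 5), ("A", "C", 6)]:
    print(f"{src}->{dst} ({vol}):", route(src, dst, vol) or "blocked")
```

In this run the first two demands are routed on disjoint paths, and the third is blocked because no single path has enough residual capacity for the whole aggregate.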


Recent advancements in AC microgrids: a new smart approach to AC microgrid monitoring and control using IoT

P. Madhumathy, Shweta Babu Prasad, in Microgrids, 2022

3.1 Introduction

Electricity plays a vital role in a wide range of applications, and the transformer is an essential element in the transmission and distribution of electric power: it converts an input AC voltage to a higher or lower output voltage. Monitoring is performed by collecting data online from sensors through different measurement techniques; manual monitoring cannot provide this information, owing to various physical constraints and the excessive heating of the transformer. Since such conditions can shorten transformer life, various approaches are currently employed for offline and online transformer monitoring. The proposed system helps locate and identify abnormal conditions so that the transformer can be serviced, thereby increasing the quality and lifetime of transformers.

Microgrids (MGs) are electrical structures that integrate distributed load and generation, and that may also combine the management of thermal and electrical loads, couple thermal with electrical storage, or present a “smart” interface to the grid, operating either in parallel with it or islanded from it. When operating in parallel, MGs offer an aggregation of energy, capacity, and other services to the grid.

MGs operate efficiently in numerous ways. They use cogeneration to serve matched electric and thermal loads, achieving efficiencies above 80%, compared with 30%–50% for conventional generation. In addition, renewable power sources allow MGs to undertake efficient and flexible hybrid generation. By using thermal and electrical storage to manage when power and fuel are consumed, MGs help moderate energy costs by shifting load to periods of lower demand and prices. Building temperatures generally change slowly, so with “smart” management of thermal loads, the buildings themselves can effectively be used as thermal storage to shape the load. These and similar efficiency and energy-management techniques not only save money but also significantly reduce the environmental impact of providing energy services.

MGs and nanogrids offer sustainable solutions for improving access in areas insufficiently served by conventional electricity grids, and they can bring economic and environmental benefits to these secluded areas. The word nanogrid is used to denote a small MG; nanogrids can function in both grid-connected and islanded modes. To assess performance and the long-term effects on the equipment connected to a nanogrid, long-duration measurements of power quality indices during islanded operation are needed.

MGs belong to the class of small-scale grids and incorporate distributed generation (DG) units, energy storage units, and linear or nonlinear loads, functioning in grid-connected or islanded mode. The DG units may produce renewable or nonrenewable energy and are connected through power converters. The CIGRE working group C6.22 Microgrid Evolution Roadmap (WG C6.22) offers a general interpretation: MGs are power supply systems containing distributed energy resources (e.g., distributed generators, storage units, or controllable loads) that can be operated in a controlled and coordinated manner both while connected to the primary energy network and while islanded. Initially, when renewable energy sources were introduced, the energy they generated was not sizeable in comparison with that of the massive conventional generators supplying power to the grid, so their influence on the overall operation of the grid was of little importance. In that period the whole idea was to let them produce as much energy as possible and inject it into the grid using their own algorithms; the predominant conventional generators could absorb the small imbalances and fluctuations these DGs induced.

Typically, the major communication requirements for a smart grid or an MG are as follows:

Data rate: This is a vital requirement, as only a few of the candidate networks furnish the necessary data rate. For a home or industrial area network the required data rate is <100 kbps, whereas a WAN requires >10 Mbps.

Range: An MG contains innumerable DGs, energy storage systems, and clients, all connected to the primary link through inverters to facilitate communication. Considering the dimensions of an MG and the separation of its units, which can extend to hundreds of kilometers, not many networks can satisfy this need.

Security: If there are gaps in the security system, the entire architecture is prone to physical attacks and cyberattacks. This facet needs to be strengthened by implementing appropriate security fixes across the entire architecture, thereby improving the security of the MG.

Latency: The latency limits depend on the features of the devices used in the grid. For inverter signals, the limit is <10 ms.

Reliability and scalability.

Researchers’ attempts in the 1980s to connect people through technology gave rise to pervasive computing, in which embedded systems support business carried out on a daily basis. With technology now reaching beyond the conventional PC, handheld smart devices have made communication and interaction easier and faster. A smart environment is one in which sensors, actuators, displays, and computing units are embedded naturally in the objects surrounding us, are part of our everyday lives, and are linked to one another in a seamless, effortless manner.

To realize a well-rounded Internet of Things (IoT) architecture, a capable, adequate, economical, and dynamic infrastructure is needed. Here cloud computing steps in: it provides the infrastructure and technology required for a successful IoT system, offering state-of-the-art data repositories and other dependable services.

Sensing and actuating units are linked together to create a structure that works as a whole to make new and inventive applications a reality. This common aim is realized by adopting large-scale sensing, data analysis, and presentation of results, using prevalent sensor technologies together with cloud computing.

Security can be compromised when networks are deployed extensively. Attackers can harm the network by impairing the availability of network resources, inserting false information, gaining access to confidential information, and so on. The important components of IoT, namely RFID (radio-frequency identification), WSNs (wireless sensor networks), and the cloud, are all susceptible to compromise. Cryptography is the principal tool for defending data against these attacks; RFID is the most susceptible because tags can be tracked. Cryptographic solutions to these complicated issues require deeper study before they become widely prevalent. Encryption and message authentication codes protect data by providing confidentiality, integrity, and authenticity; however, encryption alone is not sufficient against internal intrusion, so noncryptographic tools must also be employed. At regular intervals, new applications must be installed on the sensors, and old ones updated, which can be done remotely and wirelessly. The old way of updating and installing used a data dissemination tool that circulated code to all nodes without identity verification, leading to gaps in security; using a protocol that updates all nodes securely, with authentication, avoids such threats. More research is needed on security in the cloud: as the cloud deals with the financial aspects of IoT in addition to data and tools, it is more vulnerable to attacks. Identity protection becomes very important in the case of hybrid clouds, which comprise both private and public clouds.

Heterogeneous networks by nature offer multiple services via multiple applications. This means that the network must carry the different types of traffic and data generated by all the applications offering these services without any compromise in Quality of Service (QoS). The different types of applications are as follows:

Throughput- and delay-tolerant elastic traffic, used, for example, for observing weather conditions at low sampling rates.

Bandwidth- and delay-sensitive inelastic (real-time) traffic, used, for example, to monitor noise or road traffic.

Hence, a controlled and optimal approach is needed to serve the variety of traffic types on the network, each with its own QoS needs. Promising a certain QoS in wireless networks is not an easy task, as there are segments where resource allocation cannot be guaranteed. QoS in cloud computing is a further broad research area, in which the main concern is the data and tools involved. Dynamic scheduling and resource-allocation algorithms are being developed with particle swarm optimization as their basis, but as the capacity of networks and tools grows along with the IoT, bottlenecks could arise.

An embedded system is a computer system dedicated to a specific function, incorporated as part of a larger mechanical or electrical system, generally subject to real-time computing constraints, and controlling various devices. It is embedded as a component and has hardware and mechanical elements. A significant family of processors for such systems is the digital signal processors, designed and developed to cut cost and size while increasing accuracy and performance. Embedded systems are intended to carry out specific tasks, whereas a PC performs many tasks. Some also have real-time performance constraints that must be met for safe operation, while others may have low or no performance needs, allowing the hardware to be simplified and costs to be cut.

Embedded systems are known to possess the functionality listed here:

1. processing: the capacity of a system to measure and interpret analog/digital signals,

2. communication: the transmission of signals from one system to another, and

3. storage: safeguarding the received information on a temporary basis.

Any application that makes use of an embedded system requires all or a combination of the previous functionalities.

The use of embedded systems is advisable when an economical implementation, increased dependability, and performance superior to ordinary hardware are required. A single low-cost microcontroller can feasibly replace a large number of physical logic gates, timing circuits, input buffers, output drivers, etc. VLSI (very large scale integration) and embedded systems are interdependent, and neither would exist without the other.

Real-time operating systems and embedded systems serve extremely important roles in the latest technologies. Real-time systems fall into three types, based on their timing requirements.

1. Soft real-time system: performance merely deteriorates if a particular function fails to keep to its time limit. The quality of the output is reduced once the time limit is crossed, reducing the QoS of the system in the process.

2. Hard real-time system: the system fails if the time limit is not met. The hardware or software functions under a strict time constraint, and the entire system fails if it does not operate within the time period; examples are pacemakers, antilock brakes, and aircraft control systems.

3. Firm real-time system: a hybrid of the previous two.

An embedded system, comprising hardware and software, needs to function accurately to solve a particular problem, and has the following characteristics:

a. The size of processors has shrunk steadily while their performance has remarkably increased.

b. They consume very little power.

c. The prices of processors have dropped to a large extent.

d. Hardware is now better understood, and it is more feasible to place a programmable processor in the unit, making the design increasingly capable with reduced latency.

e. Several modeling and experimental tools can be employed to simulate results, reducing build time and aiding prototype development.

f. Common runtime languages such as Java, usable across a variety of applications, can now be employed, which was not possible in the recent past.

g. With the advent of multiple integrated software platforms, applications can now be developed more easily.

h. With the incorporation of embedded systems into the Internet, embedded systems can now participate in networking and function in different architectures such as local area networks (LANs), WANs, and the Internet.

Obtaining and distributing important information from the network via linked devices over a protected functional layer is the essence of IoT. Simply put, IoT is a set of network devices linked wirelessly to share information, set up communication, derive new information, and store it for future analysis. IoT is fully functional when it uses smart objects comprising sensors and actuators, constructed to take complete advantage of their networking features for interfacing and to fully utilize freely available Internet resources for interaction at different levels. The linked IoT devices synthesize data on a massive scale, and processing these data is a difficult feat; this hurdle can be overcome by accumulating and analyzing the data on a large scale.

The aim was to step out of the conventional desktop age into an advanced age of computing. In the IoT paradigm, many of the objects that envelop us will in some way be linked to the network; this can be facilitated by RFID and sensor network technologies, with data and communication devices built in so that they are not visible to the human eye. Data are generated in massive volumes that need storage and processing, and they must be presented in an accessible way to allow easy interpretation. Cloud computing provides a virtual model to assimilate monitoring devices, storage devices, and other tools; the economical solution it offers assists businesses and clients in running their operations virtually. Smart connectivity with existing networks and context-aware computing are an integral part of IoT, and with the advances made in Wi-Fi and 4G LTE, the progress in sensing and gathering information from everywhere is indisputable.

A profound development of the present Internet into a web of linked objects aims to obtain information from the surroundings (sensing) and to cooperate with hardware units, using the current norms of the Internet, for a variety of purposes. With the development of technologically advanced devices such as Bluetooth, RFID, Wi-Fi, and telephone data services, in addition to sensor and actuator units, IoT has helped the static Internet evolve into an advanced, futuristic form. This transformation has enabled communication between people and devices at remarkable proportions; researchers now aim to create smart environments by linking objects.

When the entire IoT architecture is examined, a capable, competent, protected, expandable, and economical structure is needed, with appropriate computing and storage components. Cloud computing is an emerging science that promises stable benefits generated through advanced data centers employing virtual storage units. Data are received from sensors placed everywhere, and the information is inspected, interpreted, and presented aesthetically to the user, all while the machinery remains hidden from the user.

In the proposed system, parameters such as temperature, pressure, level, humidity, voltage, current, vibration, and movement are recorded by an Arduino ATmega connected to a PC or laptop. The collected data are transmitted via the Internet to a server that can be located anywhere across the globe.


Network Design Problems—Notation and Illustrations

Michał Pióro, Deepankar Medhi, in Routing, Flow, and Capacity Design in Communication and Computer Networks, 2004

2.6 FAIR NETWORKS

We will now consider another yet different type of design problem. So far, we have assumed that the demand volumes are given and the network is required to route these volumes. Consider a network where the demand is elastic. Elasticity means that each demand can consume any bandwidth assigned to its path, perhaps within certain predefined bounds. This assumption corresponds, for instance, to a network with demands generating elastic traffic which can accommodate the changing bandwidth currently assigned to it, such as traffic generated during Internet web sessions.

A general problem in such networks is how much demand volume to assign to each demand so that the capacities of links are not exceeded; to simplify, we will first assume that a fixed path is given for each demand to carry its (elastic) demand volume. In other words, in this problem we have capacity constraints along the lines of those listed in Problem (2.4.15), but no particular value for hd in the demand constraint is assumed; the demand constraint need only be satisfied within some predefined bounds. An obvious initial solution would be to assign each demand its lower bound, if this satisfies the capacity constraints; if not, the problem is simply infeasible. Assuming feasibility, we still want the network to carry more than the lower bound of the demand volume, in such a way that some level of fairness among different demands is maintained; this raises the question of how to assign volumes to the demands in a fair way. Furthermore, we are interested in understanding the impact on network throughput, which is defined to be the total demand volume carried by the network.

By far, the best known fairness criterion is Max-Min Fairness (MMF), also called equity. In the pure elastic case with no bounds, the first step of obtaining the MMF solution consists in assigning the same maximal volume to all demands, assuring that minimal assignment is maximized. Consider the network in Figure 2.8. It has E = 2 links, V = 3 nodes, and D = 3 demands. The link capacities are c1 = c2 = 1 1/2. Clearly, all demands have exactly one candidate path. Under the MMF principle, the first step of the solution results in the following flow allocation: x11 = x21 = x31 = 3/4. In fact, in this case, we are already at the final solution since all capacities are exhausted. Otherwise, if after the first step, some free capacity is still present, the process of increasing demand volumes for those demands (for which it is possible) would be continued. In this way, the second minimal assignment is maximized and so on. To illustrate this point, suppose we increase the capacity of e = 2 to 2 units. Then, after the first step, we will have 1/2 unit of capacity left on link e = 2. Thus, demand d = 2 (and only this demand) can utilize this unused capacity, thus increasing its assigned demand volume to 1 1/4.


FIGURE 2.8. Two-Link Network

We notice that the MMF solution is fair from the user's viewpoint. On the other hand, returning to the example in Figure 2.8, if we were to optimize the total throughput in this network we would find that the maximal throughput is 3, achieved with the highly unfair solution x11 = x21 = 1.5 and x31 = 0. A simple but important observation is that what is good for the network may not necessarily be good for all individual users (demands). With the MMF solution, the resulting throughput is equal to x11 + x21 + x31 = 2 1/4.
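The first MMF step described above is an instance of the progressive-filling algorithm, which the following sketch (Python, with the link-path incidence of the Figure 2.8 example hard-coded) implements: raise all rates at a common pace until some link saturates, freeze the demands crossing saturated links, and continue with the rest.

```python
# Progressive filling for max-min fairness on the two-link example of Figure 2.8.
# Demand 1 uses link 1, demand 2 uses link 2, demand 3 (the long one) uses both.
links = {1: 1.5, 2: 1.5}                 # link capacities c1, c2
paths = {1: [1], 2: [2], 3: [1, 2]}      # links used by each demand

rate = {d: 0.0 for d in paths}
active = set(paths)
while active:
    # Largest common increment before some link carrying an active demand fills up.
    inc = min(
        (links[e] - sum(rate[d] for d in paths if e in paths[d]))
        / sum(1 for d in active if e in paths[d])
        for e in links if any(e in paths[d] for d in active)
    )
    for d in active:
        rate[d] += inc
    # Freeze demands that now cross a saturated link.
    saturated = {e for e in links
                 if sum(rate[d] for d in paths if e in paths[d]) >= links[e] - 1e-9}
    active = {d for d in active if not saturated.intersection(paths[d])}

print(rate, "throughput =", sum(rate.values()))   # {1: 0.75, 2: 0.75, 3: 0.75}, 2.25
```

Raising c2 to 2 in the capacities above reproduces the second scenario: demands 1 and 3 freeze at 3/4 and demand 2 continues up to 1 1/4.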

Clearly, the throughput degradation observed with the MMF rule is due to the fact that the same volume is assigned to every demand whatever the number of links on its path may be (the demands with long paths use up capacity of many links). Hence, a natural question arises whether there is some compromise solution between MMF and throughput maximization that has better throughput than MMF, yet is not as unfair as pure throughput maximization. The answer is yes, and one such fair allocation principle is called Proportional Fairness (PF).

The PF principle uses the revenue objective, which consists in maximizing the sum of (natural) logarithms of the volumes assigned to demands; in our case, maximizing log x11 + log x21 + log x31. The rationale behind using the logarithmic function is that it does not allow the assignment of zero volume to any demand (this would make the revenue equal to −∞), and at the same time it makes it unprofitable to assign too much volume to any one demand (think why). Note that the objective function is non-linear, in contrast to the examples discussed so far in this chapter. You may check that the solution for the PF principle is x11 = x21 = 1 and x31 = 1/2 (Exercise 2.7).

From the user's point of view, the PF solution is less fair than the MMF solution: the long flow (i.e., the flow using both links) x31 = 1/2 is smaller than the two short flows (i.e., the flows using one link each) x11 = x21 = 1. However, because of favoring shorter flows, the PF allocation is more efficient in terms of throughput, in this case equal to x11 + x21 +x31 = 2 1/2. This is a general observation: the PF solution does better than the MMF solution in terms of throughput, at the expense of fairness given to the users. Another way to examine this is that addressing fairness can reduce overall throughput. Hence, PF can be viewed as a compromise between throughput maximization and MMF.
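The PF solution quoted above is easy to verify numerically. At the optimum both links are saturated, so x11 = x21 = 3/2 − x31 and the PF objective reduces to a function of x31 alone; the sketch below (Python) locates its maximum by a coarse grid search.

```python
import math

# PF on the two-link example: maximize log x11 + log x21 + log x31
# subject to x11 + x31 <= 3/2 and x21 + x31 <= 3/2.
# At the optimum both links are full, so x11 = x21 = 1.5 - x31.
best = max(
    (2 * math.log(1.5 - x31) + math.log(x31), x31)
    for x31 in (i / 10000 for i in range(1, 14999))
)
x31 = best[1]
print(f"x31 = {x31:.3f}, x11 = x21 = {1.5 - x31:.3f}")   # -> 0.500 and 1.000
print(f"throughput = {2 * (1.5 - x31) + x31:.3f}")       # -> 2.500
```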

From the above example it follows that the capacitated flow allocation problem in the PF case is to find a flow allocation vector x satisfying the capacity constraints (2.4.7) and maximizing the (logarithmic) revenue function:

(2.6.1)  R(x) = ∑d rd log Xd,

where Xd = ∑p xdp is the total flow allocated to demand d, and rd is a positive weight (revenue) associated with demand d. This is a tractable mathematical formulation which can be solved, for instance, by introducing a linear approximation of the logarithmic function and solving the resulting linear programming problem.

Solving the MMF capacitated allocation problem is more complicated. In general, it is just not enough to find a flow allocation vector x which maximizes the minimal (over all demands) total flow Xd assigned to all demands; if such a vector x is found, then, in general, some link capacity will still be free and can be used for increasing flow allocations for at least a subset of demands.

To illustrate this point, consider the network depicted in Figure 2.9. Applying solution x1 leaves room for further increase in flow for demand d = 1, leading to the final (optimal) MMF allocation vector:


FIGURE 2.9. Multiple Optimal Allocation Vectors

x11 = 1, P11 = {2};   x12 = 1, P12 = {1, 3};   x21 = 1, P21 = {1, 4}

Applying solution x2 does not leave room for any further improvement.

The final total allocation vector X = (X1, X2) is equal to (2, 1), and has the crucial property that when sorted, it is lexicographically maximal in the set of all feasible sorted total allocation vectors. Recall that a sorted vector X = (X1, X2, …, XD) is lexicographically greater than another sorted vector Z = (Z1, Z2, …, ZD) if there exists d, 0 ≤ d < D, such that Xi = Zi for i = 1, 2,…, d and Xd+1 > Zd+1. Notice that it is not a straightforward task to find the MMF solution vector, since, as in our example, we do not know in advance which of the possible solutions from the first step leads to the final solution (in general, there can be many steps to perform).
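The lexicographic comparison used in this definition is straightforward to express in code; a minimal helper (Python) follows.

```python
def lex_greater(x, z):
    """True if sorted vector x is lexicographically greater than sorted vector z."""
    for xi, zi in zip(x, z):
        if xi != zi:
            return xi > zi          # the first differing coordinate decides
    return False                    # equal vectors are not strictly greater

# The MMF total allocation (2, 1), sorted as (1, 2), beats an alternative such as
# (2.5, 0.5), sorted as (0.5, 2.5), because 1 > 0.5 in the first coordinate.
print(lex_greater(sorted([2, 1]), sorted([2.5, 0.5])))   # True
```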

The uncapacitated counterparts of the above presented capacitated allocation problems are also of interest, especially in the PF case. Chapter 8 covers fair network design in depth.


Mobile fronthaul transport options in C-RAN and emerging research directions: A comprehensive study

Dawit Hadush Hailu, ... Gebremichael T. Tesfamariam, in Optical Switching and Networking, 2018

2 C-RAN system architecture

Fig. 2 presents a simplified overall C-RAN architecture. The concept underlying C-RAN is the splitting of the traditional BS into RRH and BBU. It introduces a centralized, collaborative, cloud-based, and clean system to optimize cost and energy consumption in mobile networks [10,11]. Furthermore, it centralizes the BBU processing resources so that they can be managed and allocated dynamically on demand. Moving parts of the radio network function from the cell site, where they were collocated with the antenna, to a location deeper in the network introduces a new transmission segment, the mobile fronthaul (MFH), into the overall network infrastructure. This new C-RAN architecture offers operators many benefits, such as control of operational cost, energy savings, improved spectral and resource efficiency, greatly increased network flexibility, and facilitation of services at the edge.


Fig. 2. Base station architecture for CRAN with RRH [6].

As can be seen in Fig. 2, the overall C-RAN architecture consists of three main parts: the BBU pool, the RRH with its antenna system, and the fronthaul network. The RRH part consists of the antenna system, power amplifier, and A/D converter for analog processing. Since most of the baseband processing is conducted in the BBU pool, these modules are less complex and consume little energy. The BBU, on the other hand, conducts the baseband signal processing and is connected to the RRH by means of an optical fiber or other transport technologies; it comprises elements capable of scheduling and processing the incoming signals from different cells. Finally, the third part of C-RAN is the fronthaul network, the link between RRH and BBU. Transmission over the fronthaul network can be done using the Common Public Radio Interface (CPRI) and Open Base Station Architecture Initiative (OBSAI) protocols.

Broadband communication services, including e-banking, e-health, and e-learning [12], will coexist with D2D communication in future everyday life. Thus, C-RAN should be designed as a highly integrated system to meet the ever-growing demand of applications and services. Classical optimization techniques such as cooperative communication and interference management might be utilized. Network densification, user-behaviour studies [13], cognitive networks [14,15], Software Defined Networking (SDN), and intelligent wireless backhauling [1] are also important solutions. An important means of offloading traffic in a 5G network is caching at the network edge [12]. The cache size has a significant impact on the energy efficiency of single-tier networks [16,17], while in heterogeneous networks a lot of work has been done on content caching using various algorithms and schemes [18–20]. The authors of [21] consider the C-RAN, Heterogeneous C-RAN (H-CRAN), and Fog Radio Access Network (F-RAN) to be emerging technologies capable of fulfilling the challenging goals of 5G systems; specifically, they address the technology adopted to access the radio network.

2.1 Requirements and challenges of C-RAN for CPRI transport

The requirements of C-RAN architectures featuring a fronthaul network depend on the BBU–RRH separation distance. Thus, the amount of latency introduced in the fronthaul network must be carefully considered, and the actual timing performance must be accurately determined. As a result, the fronthaul network of a C-RAN architecture imposes very stringent requirements. There are also a number of challenges when a single BBU is shared by a number of RRHs. The C-RAN requirements and challenges are described thoroughly below.

2.1.1 Capacity

Traffic flows in the C-RAN network have a very high bitrate, usually on the order of units to tens of Gbit/s per cell site, resulting from the aggregation of flows from one or more antennas. The CPRI data rate depends on the Radio Access Technology (RAT), carrier bandwidth, and MIMO implementation [22]; it can range from 614.4 Mbit/s up to about 10.14 Gbit/s. For instance, LTE-A with 4 × 4 MIMO and a 20 MHz carrier requires approximately 4.915 Gbit/s of CPRI rate per sector; similarly, one LTE sector with 20 MHz and 2 × 2 MIMO requires a CPRI rate of 2.457 Gbit/s. Such very high data rates result in fully non-elastic traffic, even when the time-varying traffic load of a cell is considered. For traditional RAN transport these are major challenges, as those networks were designed to carry much lower bitrates.
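The CPRI rates quoted above can be reproduced from first principles. The sketch below (Python) uses the standard LTE 20 MHz sample rate of 30.72 Msample/s, the 16/15 CPRI control-word overhead, and 8b/10b line coding; the 15-bit width per I/Q component is an assumption here, as the sample width is configuration-dependent.

```python
# Rough CPRI line-rate estimate (sample rate and coding overheads are standard for
# LTE 20 MHz; the 15-bit I/Q sample width is an assumed, configuration-dependent value).
sample_rate = 30.72e6        # samples/s for a 20 MHz LTE carrier
bits_per_component = 15      # bit width of each I and Q sample (assumed)
control_overhead = 16 / 15   # one CPRI control word per 15 data words
line_coding = 10 / 8         # 8b/10b line coding

def cpri_rate(antennas: int) -> float:
    payload = sample_rate * 2 * bits_per_component * antennas   # I + Q components
    return payload * control_overhead * line_coding

print(f"2x2 MIMO: {cpri_rate(2) / 1e9:.4f} Gbit/s")   # ~2.4576
print(f"4x4 MIMO: {cpri_rate(4) / 1e9:.4f} Gbit/s")   # ~4.9152
```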

2.1.2 Latency

The fronthaul and the CPRI/OBSAI protocols impose very stringent latency requirements, which are often the overall limiting factor on how far the fronthaul network can extend. Latency is a key factor that also depends heavily on the MFH technology used; in particular, this is the case for fast, high-capacity mobile networks using the higher-speed CPRI/OBSAI options. Hence, designing the RAN and adding functionality to the MFH needs special consideration, such as the selection of the MFH technology.

As a consequence of decoupling the traditional BS functions into a centralized BS shared among multiple RRHs (i.e., a fronthaul network), strict timing conditions between BS and RRH have been specified by RAT standards. The delay contributed by transporting fronthaul signals across the RAN, the “fronthaul latency”, is one of the most significant challenges for the BBU–RRH design and has a relevant impact on the total latency of the BS.

In general, given that the total latency is fixed by the standard, as shown in Table 1, and that the internal processing delays of both BBU and RRH depend on the specific software and hardware implementation, there is a standard upper limit on the latency of the fronthaul network. The latency can be divided into two parts: 1) latency due to the adaptation of fronthaul signals to the RAN infrastructure services, caused by the technology used, such as the CPRI and OBSAI transmission/reception interfaces and other functions required by optional transport-layer technologies (multiplexing/demultiplexing, buffering, reframing, and error correction), and 2) latency due to signal propagation along the RAN. The second contribution imposes a limit on the maximum geographical distance between the BBU hotel and the controlled cell sites.

Table 1. Latency, PDV and synchronization requirements for Ethernet Fronthaul with symmetry assumption.

Latency budget (BBU to RRH, including fiber length, PDV, bridged delays): 50 µs for data rates of 1–10 Gbit/s [23]
Latency budget (excluding cable, BBU to RRH): 5 µs for data rates of 1–10 Gbit/s [23]
Maximum frequency error contribution: 2 ppb [24]
Maximum bit error ratio: 10⁻¹² [23,25]
Maximum end-to-end latency/RTT (including fiber length, PDV, bridged delays): 100–400 µs (250 µs for optical networks) [23,26]
Maximum PDV: 5 µs or 10% of end-to-end latency [27]
Geographical distance between RRH and BBU: less than 20 km (current working assumption); 25 km for optical networks [28,29]
PLR (caused by bit errors, congestion, etc.): 10⁻⁶–10⁻⁹ [27]
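As a rough illustration of how the latency budget in Table 1 translates into reach, the sketch below converts a one-way latency budget into a maximum fiber length, assuming a propagation delay of about 5 µs per km of fiber (refractive index around 1.5). The equipment-delay parameter is a hypothetical placeholder for interface and transport-layer processing.

```python
# Illustrative sketch: converting a fronthaul latency budget into a
# maximum BBU-RRH fiber length. The 5 us/km figure assumes light travels
# at roughly 200,000 km/s in fiber (refractive index ~1.5).

FIBER_DELAY_US_PER_KM = 5.0   # one-way propagation delay in fiber

def max_fronthaul_distance_km(one_way_budget_us: float,
                              equipment_delay_us: float) -> float:
    """Distance left for fiber after equipment delays are subtracted."""
    propagation_budget = one_way_budget_us - equipment_delay_us
    return propagation_budget / FIBER_DELAY_US_PER_KM

# With a 100 us one-way budget and no equipment delay, the reach is 20 km,
# matching the working assumption in Table 1; 125 us would allow 25 km.
print(max_fronthaul_distance_km(100.0, 0.0))   # 20.0 km
print(max_fronthaul_distance_km(125.0, 0.0))   # 25.0 km
```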

2.1.3 Synchronization and jitter

Baseband radio signals are very sensitive to mismatch between transmitter and receiver clock frequencies, bit errors, and variation in clock phase (jitter). The synchronization mechanisms in a traditional BS and in a C-RAN BS are different. In a traditional BS, a single clock generator feeds the all-in-one BS, whereas in C-RAN the BBU is responsible not only for transmitting the baseband data in both directions, but also for delivering the clock signal generated at the BBU to the RRH. Proper synchronization (keeping the carrier frequency accurate) is needed for mobile network operation, both to reduce the overlap of signals coming from different BSs operating at different frequencies and to meet the requirements imposed by RAT standards on the accuracy and stability of the radio interface. For successful Time Division Duplex (TDD) operation, the RRH clock is synchronized to the bit clock of the received CPRI signal; it must follow the time frames precisely so that the downlink and uplink frames do not overlap. Frequency precision can be degraded if jitter affects the CPRI signal.

There are two types of synchronization in fronthaul [26]: 1) frequency synchronization, in which the interval between two rising edges of the clocks must match, and 2) phase synchronization, in which the rising edges must occur at the same time. Table 2 summarizes the timing requirements for LTE BSs for various network features.

Table 2. Timing requirements for BSs, extracted from [30].

LTE-A FDD: frequency ±50 ppb (wide area BS), ±100 ppb (local area BS), ±250 ppb (home BS); no time requirement
LTE-A TDD: time ±5 µs (cell radius > 3 km), ±1.5 µs (cell radius ≤ 3 km)
MIMO: time ≤65 ns
eICIC: time ±1 µs
CoMP: time ±1.5 µs
CPRI: frequency ±2 ppb; time ±16.276 ns

Generally, fronthaul networks have strict synchronization requirements in terms of maximum tolerable Bit Error Rate (BER), frequency error contribution, and phase synchronization, as defined in Table 2. According to the CPRI specification, for example, the BER must be at most 10⁻¹² and the jitter introduced by the CPRI link may contribute no more than 2 ppb to the frequency accuracy budget.
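To see what these budgets mean in practice, the following sketch relates a frequency error in ppb to the phase drift it accumulates over time. Pairing the 2 ppb CPRI limit with the ±1.5 µs CoMP phase budget from Table 2 is purely illustrative, not a combination mandated by any specification.

```python
# Rough sketch relating a frequency error (in ppb) to accumulated phase
# drift, using budgets quoted in Table 2. The pairing is illustrative.

def drift_ns_per_second(freq_error_ppb: float) -> float:
    """A clock off by x ppb drifts x nanoseconds every second."""
    return freq_error_ppb  # 1 ppb = 1e-9 s of drift per second = 1 ns/s

def seconds_to_exceed(phase_budget_us: float, freq_error_ppb: float) -> float:
    """Free-running time before the phase budget is exhausted."""
    return (phase_budget_us * 1000.0) / drift_ns_per_second(freq_error_ppb)

# A link at its 2 ppb limit drifts 2 ns/s; a +/-1.5 us phase budget would
# be consumed in 750 s without resynchronization.
print(seconds_to_exceed(1.5, 2.0))  # 750.0
```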

As explained earlier, an Ethernet-based fronthaul network employs statistical multiplexing in order to utilize the network capacity efficiently. However, statistical multiplexing imposes a high bandwidth-delay product and packet delay variation (PDV) on the traffic, so buffering is needed to handle the PDV. Although the high throughput is welcomed by data-centric technologies, the delay and PDV characteristics are bottlenecks for transporting real-time applications. When such networks are used to transport packetized streams of continuous media, receiver synchronization is required; its aim is to smooth the delay variation between packet arrivals for a given stream. This packet delay variation, or jitter, can be removed at the receiver by using playout buffers [31], as shown in Fig. 3. In this approach, the buffers introduce additional delay in order to produce a playout schedule that meets the synchronization requirements: the playout buffer holds packets whose scheduled playtime is in the future and replays them according to the frequency of the packet stream. In this way, jitter is removed at the expense of increased delay.


Fig. 3. Packet stream synchronization.
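The playout-buffer operation described above can be sketched in a few lines of Python. The fixed playout delay, packet period, and arrival times below are hypothetical; the point is only that a fixed offset plus the stream's nominal spacing yields a jitter-free schedule, at the cost of added delay and of dropping late packets.

```python
# Minimal playout-buffer sketch: packets carry a source timestamp; the
# receiver delays the first packet by a fixed playout delay, then replays
# at the stream's nominal spacing, dropping packets that arrive too late.

def playout_schedule(arrivals, timestamps, playout_delay):
    """Return the playtime for each packet, or None if it arrived too late."""
    base = arrivals[0] + playout_delay          # anchor on the first packet
    first_ts = timestamps[0]
    schedule = []
    for arrival, ts in zip(arrivals, timestamps):
        playtime = base + (ts - first_ts)       # preserve nominal spacing
        schedule.append(playtime if arrival <= playtime else None)
    return schedule

# Packets generated every 10 ms at the source; arrivals show network jitter.
send_times = [0, 10, 20, 30]                    # ms
arrivals = [5, 22, 26, 55]                      # ms
print(playout_schedule(arrivals, send_times, playout_delay=15))
# [20, 30, 40, None]: the last packet misses its 50 ms deadline and is
# dropped; the others play with smooth 10 ms spacing despite the jitter.
```

A larger playout delay drops fewer packets but adds more end-to-end latency, which is exactly the tradeoff the text describes.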


URL: https://www.sciencedirect.com/science/article/pii/S1573427718300262

A survey on high-speed railway communications: A radio resource management perspective

Shengfeng Xu, ... Zhangdui Zhong, in Computer Communications, 2016

4.1 Admission control

Admission control is an essential tool for congestion control and QoS provisioning, restricting access to network resources. Generally, the admission control function has two considerations: the remaining network resources and the QoS guarantees. In an admission control mechanism, a new access request can be accepted only if there are adequate free resources to meet its QoS requirements without violating the QoS committed to already-accepted connections. Thus, there is a fundamental tradeoff between the QoS level and resource utilization. To address this tradeoff, admission control has been extensively studied in common communication networks, and different aspects of admission control design and performance analysis have been surveyed in [66]. However, the admission control problem in HSR wireless communications is more sophisticated for the following reasons.

High mobility brings challenges to the implementation of admission control schemes, such as the fast-varying fading channel and frequent handover. The fast-varying wireless channel causes channel estimation errors, which in turn make admission control decisions inaccurate. As for frequent handover, since the available handover time is short and handover connections tend to arrive in batches, it is critical to consider handover connections when implementing admission control.

Multiple services with heterogeneous QoS requirements are supported on the train. The prioritization of, and fairness among, different services should be taken into account when designing admission control schemes. For example, higher priority should be given to safety-related services, while the other services are delivered using the remaining network resources.

The above reasons make it difficult to apply common admission control schemes directly to HSR communications, so the investigation of novel and appropriate admission control schemes is of great importance. In the following, we provide an overview of the existing research on admission control for HSR wireless communications, including the classification, detailed description, and comparison of different admission control schemes.
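As a minimal illustration of the basic admission test described above, the sketch below accepts a request only if enough residual capacity remains, and reserves a hypothetical fraction of capacity for safety-related services. A real HSR scheme would additionally model handover batches and channel estimation error; the class names, rates, and reservation fraction here are illustrative only.

```python
# Hypothetical sketch of a basic admission test: accept a new connection
# only if enough residual capacity remains to meet its QoS (reduced here
# to a single required-rate figure), with a reserved share of capacity
# that only safety-related services may use.

def admit(required_rate, priority, used, capacity, safety_share=0.2):
    """Return True if the request fits without violating admitted QoS."""
    free = capacity - used
    if priority == "safety":
        return required_rate <= free            # safety may use everything
    # Other services must leave the safety reservation untouched.
    return required_rate <= free - safety_share * capacity

# With 90 of 100 units in use, 10 units are free, 20 of which are notionally
# reserved for safety traffic:
print(admit(5.0, "safety", used=90.0, capacity=100.0))  # True
print(admit(5.0, "data", used=90.0, capacity=100.0))    # False
```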

What type of traffic is described as having a high volume of data per packet?

Explanation: Video traffic tends to be unpredictable, inconsistent, and bursty compared to voice traffic. Compared to voice, video is less resilient to loss and has a higher volume of data per packet.

What type of traffic is described as having a high volume of data per packet: voice, data, or video?

Video traffic is high-volume traffic that is not as delay-sensitive as voice traffic, because video is generally not real time. It can tolerate some packet loss and delay, and delayed packets do not cause any misunderstanding.

What type of traffic is described as predictable and smooth?

Voice traffic is predictable and smooth, as shown in the figure. However, voice is very sensitive to delays and dropped packets; there is no reason to re-transmit voice if packets are lost. Therefore, voice packets must receive a higher priority than other types of traffic.
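To illustrate the prioritization point, here is a toy strict-priority scheduler in which voice packets are always dequeued before video and data. The class names and their ordering are illustrative only, not drawn from any particular QoS standard.

```python
# Toy strict-priority scheduler: voice is always served before video,
# and video before data; within a class, packets are served FIFO.

import heapq

PRIORITY = {"voice": 0, "video": 1, "data": 2}  # lower value = served first

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0                            # preserves FIFO within class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.enqueue("data", "d1")
sched.enqueue("voice", "v1")
sched.enqueue("video", "x1")
print([sched.dequeue() for _ in range(3)])       # ['v1', 'x1', 'd1']
```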