Cloud Computing Uncovered: A Research Landscape

Mohammad Hamdaqa, Ladan Tahvildari, in Advances in Computers, 2012

4.6.2 Virtualization Techniques

There are several techniques for fully virtualizing hardware resources while satisfying the virtualization requirements (i.e., equivalence, resource control, and efficiency) originally presented by Popek and Goldberg [74]. These techniques were created to enhance performance and to deal with the flexibility problem in the Type 1 architecture.

Popek and Goldberg classified the instructions to be executed in a virtual machine into three groups: privileged, control sensitive, and behavior sensitive instructions. Not all control sensitive instructions are necessarily privileged (e.g., on x86). However, Popek and Goldberg's Theorem 1 mandates that all control sensitive instructions be treated as privileged (i.e., trapped) in order to build an effective VMM.
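The Popek and Goldberg condition can be illustrated with a toy check; the instruction names below are illustrative only, not a complete x86 model:

```python
# Toy model of Popek & Goldberg's instruction classification.
PRIVILEGED = {"LGDT", "LIDT", "MOV_CR3", "HLT"}
# POPF silently alters control-relevant flags on classic x86 without trapping,
# which is the textbook example of a sensitive-but-unprivileged instruction.
CONTROL_SENSITIVE = {"LGDT", "LIDT", "MOV_CR3", "POPF"}

def supports_effective_vmm(privileged, sensitive):
    """Theorem 1: every sensitive instruction must be privileged (trap)."""
    return sensitive <= privileged

# Classic (pre-VT-x) x86 fails the test because of instructions like POPF:
print(supports_effective_vmm(PRIVILEGED, CONTROL_SENSITIVE))  # False
print(CONTROL_SENSITIVE - PRIVILEGED)                         # {'POPF'}
```

The set difference identifies exactly the instructions a VMM must handle by other means, e.g., binary translation.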

Depending on the virtualization technique used, hypervisors can be designed to be either tightly or loosely coupled with the guest operating system. Tightly coupled hypervisors (i.e., OS-assisted hypervisors) perform better than loosely coupled ones (i.e., hypervisors based on binary translation). On the other hand, tightly coupled hypervisors require the guest operating system to be explicitly modified, which is not always possible. One of the Cloud infrastructure design challenges is to have hypervisors that are loosely coupled but still deliver adequate performance. Hypervisors that are operating-system agnostic increase system modularity, manageability, maintainability, and flexibility, and allow upgrading or changing the operating systems on the fly. The following are the main virtualization techniques currently in use:

(a)

Binary translation and native execution: This technique uses a combination of binary translation for handling privileged and sensitive instructions, and direct (native) execution for user-level instructions [74]. It offers excellent compatibility: the guest OS does not need to know that it is virtualized. However, building binary translation support for such a system is very difficult, and the translation itself adds significant virtualization overhead [75].

(b)

OS assisted virtualization (paravirtualization): In this technique, the guest OS is modified to be virtualization-aware, communicating with the hypervisor through hypercalls that handle privileged and sensitive instructions. Because sensitive operations are replaced with explicit hypercalls rather than trapped and translated, paravirtualization can significantly reduce virtualization overhead. However, paravirtualization has poor compatibility: it cannot support operating systems whose source cannot be modified (e.g., Windows). Moreover, the overhead introduced by the hypercalls can affect performance under heavy workloads, and the modifications made to the guest OS to keep it compatible with the hypervisor can hurt the system's maintainability.

(c)

Hardware-assisted virtualization: As an alternative to binary translation, and in an attempt to enhance both performance and compatibility, hardware vendors (e.g., Intel and AMD) started supporting virtualization at the hardware level. In hardware-assisted virtualization (e.g., Intel VT-x, AMD-V), privileged and sensitive calls automatically trap to the hypervisor, eliminating the need for binary translation or paravirtualization. Moreover, since the trapping is handled at the hardware level, performance improves significantly.
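As a practical aside, on Linux hosts hardware-assisted virtualization support can be detected from the CPU flags the kernel exposes in /proc/cpuinfo; a minimal sketch (Linux-specific, and the helper names are our own):

```python
# Detect Intel VT-x ("vmx") or AMD-V ("svm") support from /proc/cpuinfo.
def hw_virt_flags(cpuinfo_text):
    """Return the subset of {'vmx', 'svm'} present in the CPU flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return flags & {"vmx", "svm"}
    return set()

def host_supports_hw_virt(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            return bool(hw_virt_flags(f.read()))
    except OSError:
        return False  # non-Linux host or unreadable path

sample = "processor : 0\nflags : fpu vme de pse vmx smx\n"
print(hw_virt_flags(sample))  # {'vmx'}
```

The same flags are what KVM checks for at module load time, which is why KVM (discussed below) cannot run without hardware assistance.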

Cloud providers have utilized different virtualization platforms to build their datacenters. The trend in the Cloud is to use platforms that combine paravirtualization and hardware-assisted virtualization to benefit from the advantages of both. The most notable VMM platforms are VMware, Xen, Hyper-V, and KVM. All of these platforms use the Type 1 (bare-metal) hypervisor architecture. However, while VMware uses the direct driver model to install the hypervisor on bare metal, the others use the indirect driver model. Moreover, all of these platforms support paravirtualization and full virtualization with binary translation. Unlike VMware, Xen, and Hyper-V, KVM makes the use of hardware-assisted virtualization mandatory. The reader can refer to [76] for a full comparison of these platforms.


URL: https://www.sciencedirect.com/science/article/pii/B9780123965356000028

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Full Virtualization

Full virtualization is a virtualization technique used to provide a VME that completely simulates the underlying hardware. In this type of environment, any software capable of execution on the physical hardware can be run in the VM, and any OS supported by the underlying hardware can be run in each individual VM. Users can run multiple different guest OSes simultaneously. In full virtualization, the VM simulates enough hardware to allow an unmodified guest OS to be run in isolation. This is particularly helpful in a number of situations. For example, in OS development, experimental new code can be run at the same time as older versions, each in a separate VM. The hypervisor provides each VM with all the services of the physical system, including a virtual BIOS, virtual devices, and virtualized memory management. The guest OS is fully disengaged from the underlying hardware by the virtualization layer.

Full virtualization is achieved by using a combination of binary translation and direct execution. With full virtualization hypervisors, the physical CPU executes nonsensitive instructions at native speed; OS instructions are translated on the fly and cached for future use, and user level instructions run unmodified at native speed. Full virtualization offers the best isolation and security for VMs and simplifies migration and portability as the same guest OS instance can run on virtualized or native hardware. Figure 1.5 shows the concept behind full virtualization.
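The translate-once-and-cache behavior described above can be sketched as a toy model; the instruction names and emulation stubs are invented for illustration, and real binary translators operate on basic blocks of machine code rather than single named instructions:

```python
# Toy binary translation with a translation cache: sensitive "instructions"
# are rewritten once into safe stand-ins and cached; everything else takes
# the direct-execution path.
SENSITIVE = {"CLI", "STI", "POPF"}
translation_cache = {}
translations_performed = 0

def translate(instr):
    global translations_performed
    translations_performed += 1
    return f"vmm_emulate_{instr.lower()}"   # stand-in for real code generation

def execute(stream):
    out = []
    for instr in stream:
        if instr in SENSITIVE:
            if instr not in translation_cache:   # translate on first use
                translation_cache[instr] = translate(instr)
            out.append(translation_cache[instr])  # reuse cached translation
        else:
            out.append(instr)                     # direct execution path
    return out

trace = execute(["MOV", "CLI", "ADD", "CLI", "POPF"])
print(trace)
print(translations_performed)  # 2: CLI translated once, POPF once
```

The cache is what keeps the translation cost amortized: the second CLI reuses the first translation instead of paying the rewrite cost again.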


Figure 1.5. Full Virtualization Concepts


URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000011

Internet of Things: an overview

F. Khodadadi, ... R. Buyya, in Internet of Things, 2016

1.4.1 Resource partitioning

The first step for satisfying resource provisioning requirements in IoT is to efficiently partition the resources and gain a higher utilization rate. This idea is widely used in cloud computing via virtualization techniques and commodity infrastructures; however, virtual machines are not the only method for achieving this goal. Since the hypervisor, which is responsible for managing interactions between the host and guest VMs, requires a considerable amount of memory and computational capacity, this configuration is not suitable for IoT, where devices often have constrained memory and processing power. To address these challenges, the concept of Containers has emerged as a new form of virtualization technology that can match the demand of devices with limited resources. Docker (https://www.docker.com/) and Rocket (https://github.com/coreos/rkt) are the two best-known container solutions.

Containers are able to provide portable and platform-independent environments for hosting the applications and all their dependencies, configurations, and input/output settings. This significantly reduces the burden of handling different platform-specific requirements when designing and developing applications, hence providing a convenient level of transparency for applications, architects, and developers. In addition, containers are lightweight virtualization solutions that enable infrastructure providers to efficiently utilize their hardware resources by eliminating the need for purchasing expensive hardware and virtualization software packages. Since containers, compared to VMs, require considerably less spin-up time, they are ideal for distributed applications in IoT that need to scale up within a short amount of time.
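A minimal conceptual model of why container images are portable, assuming an illustrative (non-OCI) image structure of our own invention:

```python
# Conceptual model: a container image bundles the application together with
# its dependencies and configuration, so the same image runs unchanged on any
# host that has a compatible runtime. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Image:
    app: str
    dependencies: list = field(default_factory=list)
    env: dict = field(default_factory=dict)

def run(image, host):
    # The host contributes only a shared kernel and a runtime; everything
    # the app needs travels inside the image.
    return f"{image.app} running on {host} with {len(image.dependencies)} deps"

web = Image("web-api", dependencies=["libssl", "python3.11"], env={"PORT": "8080"})
print(run(web, "edge-gateway"))   # same image, any compatible host
print(run(web, "cloud-node"))
```

This bundling, rather than a full guest OS per instance, is also why container spin-up is so much faster than booting a VM.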

An extensive survey by Gu et al. [34] focuses on virtualization techniques proposed for embedded systems and their efficiency in satisfying real-time application demands. After reviewing numerous Xen-based, KVM-based, and microkernel-based solutions targeting processor architectures such as ARM, the authors argue that operating-system virtualization techniques, known as container-based virtualization, can bring advantages in terms of performance and security by sandboxing applications on top of a shared OS layer. Linux-VServer [35], Linux Containers (LXC), and OpenVZ are examples of OS virtualization used in the embedded-systems domain.

The concept of virtualized operating systems for constrained devices has been further extended to smartphones by providing the means to run multiple Android operating systems on a single physical smartphone [36]. Given the heterogeneity of devices in IoT, and the fact that many of them can leverage virtualization to boost their utilization rate, task-grain scheduling, which considers individual tasks within different containers and virtualized environments, can potentially challenge current resource-management algorithms that view these layers as black boxes [34].


URL: https://www.sciencedirect.com/science/article/pii/B9780128053959000010

Virtualization Challenges

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Blue Pill

“Blue Pill” is the codename for a very controversial rootkit researched and presented by the Polish security specialist Joanna Rutkowska in 2006. Blue Pill is a special type of malware that uses the virtualization extensions of certain CPUs to execute as a hypervisor: it inserts itself beneath the running OS as a thin virtual platform on which the entire OS continues to run. It is capable of examining the entire state of the machine and grants the perpetrator full privilege while the OS believes it is running directly on physical hardware. The basic idea behind Blue Pill is that a piece of malware that is also a VM monitor (VMM) could take over the host OS and remain undetected by staying within the VMM. There are some issues with this theory, including the following:

1.

Having a VMM take over a host OS would be very difficult.

2.

The malware would have to prevent the OS from being able to detect that it was now a VM.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000096

Mobility Management

Stefan Rommer, ... Catherine Mulligan, in 5G Core Networks, 2020

7.5 N2 management

In EPS, when a UE attaches to EPC it is assigned a 4G-GUTI, and the 4G-GUTI is associated with a specific MME; if there is a need to move the UE to another MME, the UE needs to be updated with a new 4G-GUTI. This can be a drawback, e.g. if the UE is using a power-saving mechanism or if a large number of UEs are to be updated at the same time. With 5GS and N2 there is support for moving one or multiple UEs to another AMF without immediately having to update the UE with a new 5G-GUTI.

The 5G-AN and the AMF are connected via a Transport Network Layer that is used to transport the signaling of the NGAP messages between them. The transport protocol used is SCTP. The SCTP endpoints in the 5G-AN and the AMF set up SCTP associations between them, identified by the transport addresses used. An SCTP association is generically called a Transport Network Layer Association (TNLA).

The N2 (also called NG in RAN3 specifications e.g. 3GPP TS 38.413) reference point between the 5G-AN and the 5GC (AMF) supports different deployments of the AMFs e.g. either

(1)

an AMF NF instance that uses virtualization techniques such that it can provide services toward the 5G-AN in a distributed, redundant, stateless, and scalable manner, and from several locations, or

(2)

an AMF Set which uses multiple AMF NF instances within the AMF Set and the multiple AMF Network Functions are used to enable the distributed, redundant, stateless, and scalable characteristics.

Typically, the former deployment option would require operations on N2 like add and remove TNLA, as well as release TNLA and rebinding of NGAP UE association to a new TNLA to the same AMF. The latter meanwhile would require the same but in addition requires operations to add and remove AMFs and to rebind NGAP UE associations to new AMFs within the AMF Set.

The N2 reference point supports a form of self-automated configuration. During this type of configuration the 5G-AN nodes and the AMFs exchange NGAP information of what each side supports e.g. the 5G-AN indicates supported TAs, while the AMF indicates supported PLMN IDs and served GUAMIs. The exchange is performed by the NG SETUP procedure and, if updates are required, the RAN or AMF CONFIGURATION UPDATE procedure. The AMF CONFIGURATION UPDATE procedure can also be used to manage the TNL associations used by the 5G-AN. These messages (see Chapter 14 for a complete list of messages) are examples of non-UE associated N2 messages as they relate to the whole NG interface instance between the 5G-AN node and the AMF utilizing a non-UE-associated signaling connection.

UE-associated services and messages are related to one UE. NGAP functions that provide these services are associated with a UE-associated signaling connection that is maintained for the UE. Fig. 7.4 shows NG-AP instances in the 5G-AN and in the AMFs; these may be either non-UE associated or UE associated. The NG-AP communication uses the N2 Connection with a TNLA as transport (SCTP is used as the transport protocol, see Chapter 14).


Fig. 7.4. N2 reference point with TNLA as transport.

The N2 reference point supports multiple TNL associations per AMF (up to 32). TNL associations can be added or removed as needed, and weight factors can be used to steer which TNL associations the 5G-AN uses for NGAP communication with the AMF (i.e. achieving load balancing and re-balancing of TNL associations between the 5G-AN and the AMF).
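One plausible way a 5G-AN node could apply weight factors when choosing among an AMF's TNL associations is weighted random selection; 3GPP does not mandate a particular algorithm, and the association names and weights below are hypothetical:

```python
# Weight-factor-based selection among TNL associations. A zero weight steers
# new signaling away from an association, e.g. ahead of removing it.
import random

tnlas = {"tnla-1": 3, "tnla-2": 1, "tnla-3": 0}  # association -> weight factor

def pick_tnla(weights, rng):
    """Pick one TNLA, with probability proportional to its weight factor."""
    candidates = [(name, w) for name, w in weights.items() if w > 0]
    names, ws = zip(*candidates)
    return rng.choices(names, weights=ws, k=1)[0]

rng = random.Random(42)  # seeded for reproducibility
picks = [pick_tnla(tnlas, rng) for _ in range(1000)]
print(picks.count("tnla-3"))                           # 0: never selected
print(picks.count("tnla-1") > picks.count("tnla-2"))   # True: 3:1 bias
```

Setting a weight to zero models the drain behavior described later in this section, where an AMF about to be removed stops receiving new UEs without disturbing existing associations.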

For a specific UE in CM-CONNECTED state the 5G-AN node (e.g. gNB) maintains the same NGAP UE-TNLA-binding (i.e. use the same TNL association and same NGAP association for the UE) unless explicitly changed or released by the AMF.

The AMF may change the TNL association used for a UE in CM-CONNECTED state at any time, e.g. by responding to an N2 message from a 5G-AN node using a different TNL association. This can be beneficial when the UE context is moved to another AMF within the AMF Set.

The AMF may also command the 5G-AN node to release the TNL association used for a UE in CM-CONNECTED state while maintaining user plane connectivity (N3) for the UE. This can be useful if there is a need to change the AMF within the AMF Set at a later time (e.g. when the AMF is to be taken out of service).

7.5.1 AMF management

The 5GC, including N2, supports the possibility to add and remove AMFs from AMF Sets. Within 5GC, the NRF is updated (and DNS system for interworking with EPS) with new NFs when they are added, and the AMF's NF profile includes which GUAMI(s) the AMF handles. For a GUAMI there may also be one or more backup AMF registered in the NRF (e.g. to be used in case of failure or planned removal of an AMF).

A planned removal of an AMF can be done either by the AMF storing the registered UEs' contexts in a UDSF (Unstructured Data Storage Function), or by the AMF deregistering itself from the NRF, in which case the AMF notifies the 5G-AN that it will be unavailable for processing transactions for the GUAMI(s) configured on it. Additionally, the AMF can initially decrease its load by changing its weight factor toward the 5G-AN, e.g. setting it to zero, causing the 5G-AN to select other AMFs within the AMF Set for new UEs entering the area.

If the AMF indicates it is unavailable for processing transactions, the 5G-AN selects a different AMF within the same AMF Set. If that is not possible, the 5G-AN selects a new AMF from another AMF Set. It is also possible for the AMF to control the selection of the new AMF within the AMF Set by indicating a rebinding of the NGAP UE associations to an available TNLA on a different AMF in the same AMF Set.

Within 5GC, the NRF notifies CP NFs that have subscribed that the AMF identified by the GUAMI(s) will be unavailable for processing transactions. Each CP NF then selects another AMF within the same AMF Set; the new AMF retrieves the UE context from the UDSF, updates the UE with a new 5G-GUTI, and updates peer CP NFs with the new AMF address information.

If no UDSF is deployed in the AMF Set, a planned removal of an AMF is done in a similar way, with the difference that the AMF can forward registered UE contexts directly to target AMF(s) within the same AMF Set. Backup AMF information can be sent to other NFs during their first interaction.

An AMF removal may also be unplanned, i.e. due to a failure. To allow automatic recovery to another AMF, UE contexts can be stored in the UDSF, or, at per-GUAMI granularity, in other AMFs (each serving as backup AMF for the indicated GUAMI).

7.5.2 5GC assistance for RAN optimizations

As the UE context information is not kept in the NG-RAN when the UE transitions to RRC-IDLE, it may be hard for the NG-RAN to optimize its logic related to the UE, as UE-specific behavior is unknown unless the UE has been in RRC-CONNECTED state for some time. There are NG-RAN-specific means to retrieve such UE information, e.g. UE history information can be transferred between NG-RAN nodes. To further facilitate optimized decisions in the NG-RAN, e.g. for UE RRC state transitions, CM state transition decisions, and an optimized NG-RAN strategy for RRC-INACTIVE state, the AMF may provide 5GC assistance information to the NG-RAN.

The 5GC is better placed to store UE-related information over longer periods and to retrieve information from external entities through external interfaces. When the assistance information is calculated by the 5GC (AMF), the algorithms used, the related criteria, and the decision on when the information is considered suitable and stable enough to send to the NG-RAN are vendor specific. Therefore, the assistance information sent to the NG-RAN is often accompanied by an indication of whether it was derived from statistics or retrieved from subscription information (e.g. set by agreements or via an API).

5GC assistance information is divided into 3 parts:

-

Core Network assisted RAN parameters tuning;

-

Core Network assisted RAN paging information;

-

RRC Inactive Assistance Information.

Core Network assisted RAN parameters tuning provides the NG-RAN with a way to understand UE behavior so as to optimize NG-RAN logic e.g. how long to keep the UE in specific states. Besides the content listed in Table 7.4, the 5GC also provides the source of the information e.g. if it is subscription information or derived based on statistics.

Table 7.4. Overview of CN assistance information sent to NG-RAN

Each entry below lists the information element, its values, and a description.

Core Network assisted RAN parameters tuning:

-

Expected Activity Period: INTEGER (seconds) (1…30 | 40 | 50 | 60 | 80 | 100 | 120 | 150 | 180 | 181, …). When set to 181, the expected activity time is longer than 181 seconds.

-

Expected Idle Period: INTEGER (seconds) (1…30 | 40 | 50 | 60 | 80 | 100 | 120 | 150 | 180 | 181, …). When set to 181, the expected idle time is longer than 181 seconds.

-

Expected HO Interval: ENUMERATED (sec15, sec30, sec60, sec90, sec120, sec180, long-time, …). The expected time interval between inter NG-RAN node handovers. If “long-time” is included, the interval is expected to be longer than 180 seconds.

-

Expected UE Mobility: ENUMERATED (stationary, mobile, …). Indicates whether the UE is expected to be stationary or mobile.

-

Expected UE Moving Trajectory: List of Cell Identities and Time Stayed in Cell (0…4095 seconds). Includes visited and non-visited cells; Time Stayed in Cell is included for visited cells (if longer than 4095 seconds, 4095 is set).

RRC Inactive Assistance Information:

-

UE Identity Index Value: 10 bits of the UE's 5G-S-TMSI. Used by the NG-RAN node to calculate the Paging Frame.

-

UE Specific DRX: ENUMERATED (32, 64, 128, 256, …). Input to derive DRX as specified in 3GPP TS 38.304.

-

Periodic Registration Update Timer: BIT STRING (SIZE(8)). Bits used to derive a timer value as defined in 3GPP TS 38.413.

-

MICO Mode Indication: ENUMERATED (true, …). Indicates whether the UE is configured with MICO mode.

-

TAI List for RRC Inactive: List of TAIs. The TAIs corresponding to the RA.

Assistance Data for Paging:

-

Recommended Cells for Paging: List of Cell Identities and Time Stayed in Cell (0…4095 seconds). Includes visited and non-visited cells; Time Stayed in Cell is included for visited cells (if longer than 4095 seconds, 4095 is set).

-

Paging Attempt Count: INTEGER (1…16, …). Increased by one at each new paging attempt.

-

Intended Number of Paging Attempts: INTEGER (1…16, …).

-

Next Paging Area Scope: ENUMERATED (same, changed, …). Indicates whether the paging area scope will change at the next paging attempt.

-

Paging Priority: ENUMERATED (8 values). Can be used for priority handling of paging in times of congestion at the NG-RAN.

Core Network assisted RAN paging information complements the Paging Policy Information (PPI) and the QoS information associated with the QoS Flows (see Chapter 9 for more detail on QoS), and assists the NG-RAN in formulating its paging policy. It also contains the Paging Priority, which gives the NG-RAN a way to understand how important the downlink signaling is. If RRC Inactive is supported and the UE is not required to be in RRC-CONNECTED state, e.g. for tracking purposes, the AMF provides RRC Inactive Assistance Information to the NG-RAN, to assist the NG-RAN in managing the use of the RRC Inactive state. Table 7.4 provides an overview of the various CN assistance information, but further information is available as part of the N2 messages sent by the 5GC to the NG-RAN that the NG-RAN can also use to optimize its behavior; the interested reader is referred to 3GPP TS 38.413 for the complete set of information.

7.5.3 Service Area and Mobility Restrictions

Mobility Restrictions enable the network, mainly via subscriptions, to control the Mobility Management of the UE as well as how the UE accesses the network. Logic similar to that used in EPS is applied in 5GS, but with some new functionality added. The 5GS supports the following:

RAT restriction:

Defines the 3GPP Radio Access Technology(ies) a UE is not allowed to access in a PLMN and may be provided by the 5GC to the NG-RAN as part of the Mobility Restrictions. The RAT restriction is enforced by the NG-RAN at connected mode mobility.

Forbidden Area:

A Forbidden Area is an area in which the UE is not permitted to initiate any communication with the network for the PLMN.

Core Network type restriction:

Defines whether the UE is allowed to access 5GC, EPC, or both for the PLMN.

Service Area Restriction:

Defines areas controlling whether the UE is allowed to initiate communication for services as follows:

Allowed Area: In an Allowed Area, the UE is permitted to initiate communication with the network as allowed by the subscription.

Non-Allowed Area: In a Non-Allowed Area a UE is “service area restricted” meaning that neither the UE nor the network is allowed to initiate signaling to obtain user services (both in CM-IDLE and in CM-CONNECTED states). The UE performs mobility related signaling as usual, e.g. mobility Registration updates when moving out of the RA. The UE in a Non-Allowed Area replies to 5GC initiated messages, which makes it possible to inform the UE that e.g. the area is now Allowed.

RAT, Forbidden Area, and Core Network type restrictions work similarly to how they do in EPS, but Service Area Restriction is a new concept. As mentioned previously, it was developed to better support use cases that do not require full mobility support. One use case is Fixed Wireless Access subscriptions: the user is given a modem or device that supports 3GPP access technologies, but the subscription restricts usage to a certain area, e.g. the user's home. The UE is provided with a list of TAs indicated as an Allowed Area or a Non-Allowed Area. To dynamically shape the area around the user's home (to become the Allowed Area), the subscription may also contain a number indicating the maximum number of allowed TAs. When the UE registers to the network, the TAs that become the Allowed Area are derived as the UE moves around. If the UE de-registers and re-registers, the Allowed Area is re-calculated. The Service Area Restriction is set in the subscription, but it is also sent to the PCF, allowing the PCF to decrease a Non-Allowed Area or increase an Allowed Area, e.g. when the user agrees to sponsored access to the network in an area.

An example of how a subscription for an Allowed Area with a maximum number of allowed TAs can be used is shown in Fig. 7.5, which shows a building that crosses TA1, TA2, and TA4. When the user agrees to a subscription limited to the area of their home, the operator determines a TA to use for the subscription (e.g. TA2, determined from the user's home address) and may in addition set a maximum number of allowed TAs, e.g. three, to ensure that the complete area of the building is covered. When the user registers at home, e.g. in TA1, the UE may get a RA of TA1 and an Allowed Area of TA1 and TA2 in the Registration reply. The network keeps an internal count of the number of TAs included in the Allowed Area against the maximum number of allowed TAs; the Allowed Area now includes TA1 and TA2 (i.e. two TAs). When the user moves into TA2 and TA4, the UE may get a RA of TA1, TA2, and TA4 and an Allowed Area of the same TAs. When the user moves outside the home, e.g. to TA5, the UE gets a new RA set to TA5 and the information that TA5 is a Non-Allowed Area; the user therefore cannot use the UE for normal services. When the user returns home, the UE is likely to get a RA of TA1, TA2, and TA4 and the information that these TAs are part of the Allowed Area, and the user can use their UE again.
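The Allowed Area bookkeeping in this example can be sketched as follows; this is a simplified single-UE model, and the real derivation is network-internal and vendor specific:

```python
# TAs visited by the UE are added to the Allowed Area until the subscribed
# maximum is reached; any further TA is treated as Non-Allowed.
class ServiceAreaRestriction:
    def __init__(self, max_allowed_tas):
        self.max_allowed_tas = max_allowed_tas
        self.allowed = []                 # TAs granted so far, in visit order

    def register(self, ta):
        """Return True if `ta` is (or becomes) part of the Allowed Area."""
        if ta in self.allowed:
            return True
        if len(self.allowed) < self.max_allowed_tas:
            self.allowed.append(ta)       # grow the Allowed Area
            return True
        return False                      # Non-Allowed Area

sub = ServiceAreaRestriction(max_allowed_tas=3)
print([sub.register(ta) for ta in ["TA1", "TA2", "TA4"]])  # [True, True, True]
print(sub.register("TA5"))  # False: outside home, maximum already reached
print(sub.register("TA1"))  # True: back home, still allowed
```

Note how TA5 stays Non-Allowed even though the UE can still perform mobility signaling there, matching the behavior described for Non-Allowed Areas above.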


Fig. 7.5. Building crossing Tracking Areas to be used for an Allowed Area.

The size of a TA depends on many aspects, e.g. the number of cells used and whether higher radio frequencies are used. While the example of a building crossing three TAs might not be that common, as a TA is usually larger than one building, the use of high radio frequencies, together with the fact that 5GS uses three octets for identifying TAs compared to two octets in EPS, gives 5GS greater scope to deploy smaller TAs.


URL: https://www.sciencedirect.com/science/article/pii/B9780081030097000077

Virtualization

Dijiang Huang, Huijun Wu, in Mobile Cloud Computing, 2018

Abstract

Virtualization is the act of creating a virtual version of computing hardware, storage devices, and computer network resources. There are three classifications of virtualization technologies. The first classification is based on the internal process model. The second classification covers most virtualization techniques at different levels of implementation within a computer. The third classification introduces two types of hypervisor. Besides virtualization through hypervisors, the container is a process-based lightweight form of virtualization, which has gained momentum recently due to its efficiency and mobility. Besides computer virtualization, network virtualization (e.g., virtual networks, SDN, NFV, etc.), mobile virtualization (e.g., KVM over ARM), and storage virtualization (e.g., blob store, file store, etc.) are also important virtualization technologies. In this chapter, various virtualization techniques are surveyed, which form the foundation to build cloud computing service models.


URL: https://www.sciencedirect.com/science/article/pii/B978012809641300003X

Virtualization on embedded boards as enabling technology for the Cloud of Things

B. Bardhi, ... L. Taccari, in Internet of Things, 2016

Abstract

Nowadays, new computing paradigms are emerging from the interlinked networks of sensors, actuators, and processing devices: Cloud of Things and Fog Computing. As a consequence of this trend, there is a strong need for virtualization solutions on embedded systems. In relation to this, two fundamental questions arise: (1) Are the current hardware technologies for embedded systems appropriate for virtualization? (2) Are the current virtualization techniques appropriate for embedded systems, and, if so, what is the most appropriate technique? This chapter tries to provide an answer to the aforementioned questions. In order to do that, three basic virtualization techniques for embedded systems are considered: full virtualization, paravirtualization (as instances of hardware-level virtualization), and containers (as an instance of operating-system-level virtualization). From a practical point of view, this chapter reports some experimental results using as testbed a Cubieboard2: a low-cost, low-energy, ARM Cortex-A7-based device with limited performance, but with hardware that supports virtualization. Such experiments first allow us to provide a positive answer to the aforementioned questions; however, some aspects, such as real-time virtualization, should still be explored and be a part of future research.


URL: https://www.sciencedirect.com/science/article/pii/B978012805395900006X

LTE Broadcast for Public Safety

Tien-Thinh Nguyen, ... Ngoc-Duy Nguyen, in Wireless Public Safety Networks 2, 2016

9.3.3.2 Isolated E-UTRAN (LTE in a box)

In some situations, for example, a natural disaster occurring at a scene outside wide-area network coverage, operators may deploy a portable or moving base station (BS), which typically includes an eNB and a lightweight integrated EPC, to ensure minimum services for the first responders. The EPC can be realized using virtualization techniques and is thus called a virtual EPC (vEPC) [NOB 15]. In this case, the backhaul connection is either limited or totally lost.

Since the vEPC can operate normally, albeit with limited capacity, minimal services such as MCPTT can still be provided. The MCPTT AS can in this case be deployed as a local data service by a virtualized function inside the portable/moving BS. In this situation, the distributed MCE architecture may be more suitable than the centralized MCE architecture thanks to its simplicity.


URL: https://www.sciencedirect.com/science/article/pii/B9781785480522500098

On preserving privacy in cloud computing using ToR

A. Carnielli, ... M. Alazab, in Pervasive Computing, 2016

2.1 Cloud Computing Reference Model

In cloud computing, because of the extremely high availability of hardware and software components, thanks to the improvements in virtualization techniques, resources appear infinite to the consumers, who do not have to worry about any possible under-provisioning. All resources, in fact, are being used almost ceaselessly since they are virtualized (estimates of the average server utilization in conventional data centers range from 5% to 20% according to Rivlin, 2008) and hence their throughput is maximized. Apart from under-provisioning, users do not need to worry about over-provisioning either because cloud consumers pay only for the resources they need on a very short-term basis (for certain resources, such as CPUs, the granularity is as fine as an hourly fee).

To appreciate the benefits of cloud computing to online service providers, consider a photo-sharing website: during "high seasons" the website will most likely experience huge demands on its resources, whereas away from these seasons the demands will be minimal. By migrating the service to the cloud, the company managing the website need not own any of the equipment needed to run its business; all hardware and software may be kept and managed remotely by the cloud provider (CP). The great advantage of this approach is that the company benefits from the elasticity of cloud computing: it demands and pays for more resources from the CP, and these resources are released when they are no longer needed after the end of the "high season." This example highlights the importance of cloud computing for flourishing new businesses. One major technology that enables this flexibility in cloud computing is virtualization. With virtualization, a CP uses a virtual machine monitor to manage and share the hardware resources among different virtual machines, and can also seamlessly set up and run more virtual machines to accommodate customers' demands. This concept is shown in Fig. 2.
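The elasticity just described can be sketched as a toy capacity calculation, in which the number of VM instances simply tracks demand, growing during the high season and shrinking afterwards. The function and all figures are hypothetical, not any provider's API:

```python
# Toy elasticity sketch: the VM count tracks demand (numbers invented).
def vms_needed(demand_rps, capacity_per_vm):
    """Smallest number of VMs whose combined capacity covers the demand."""
    return max(-(-demand_rps // capacity_per_vm), 1)  # ceiling division

# High season: demand spikes, so the provider spins up more instances ...
print(vms_needed(900, 100))  # 9 VMs
# ... and once demand falls, the extra instances are released.
print(vms_needed(150, 100))  # 2 VMs
```

The provider absorbs the difference by multiplexing many such tenants over shared physical hosts via the virtual machine monitor.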


Fig. 2. VM architecture.

From Walker, G., 2012. Cloud Computing Fundamentals. https://www.ibm.com/developerworks/cloud/library/cl-cloudintro/.

As shown in Fig. 3, cloud computing comprises the following computing services at both hardware and software levels (Aiash et al., 2015).


Fig. 3. Cloud computing architecture.

From Aiash, M., et al., 2015. Introducing a hybrid infrastructure and information-centric approach for secure cloud computing. In: Proceedings of IEEE 29th International Conference on Advanced Information Networking and Applications Workshops (WAINA).

Infrastructure as a Service (IaaS) provides the infrastructural components in terms of processing, storage, and networking. It uses virtualization techniques to provide multi-tenancy, scalability, and isolation; multiple virtual machines can be allocated to a single physical machine, known as the host. Examples of such services are Amazon S3 and EC2, Mosso, and OpenNebula (Varia et al., 2014; Llorente, 2014).

Platform as a Service (PaaS) provides the service of running applications without the hassle of maintaining the hardware and software infrastructure of the IaaS. Google App Engine and Microsoft Azure (Tulloch, 2013) are examples of PaaS.

Software as a Service (SaaS) is a model of software deployment that enables end-users to run their software and applications on-demand. Examples of SaaS are Salesforce.com and Clarizen.com.
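Of the three models, IaaS leans most directly on virtualization for multi-tenancy: several tenants' VMs are packed onto shared hosts. A rough sketch of this idea, using a simple first-fit placement policy (real IaaS schedulers are far more sophisticated, and all names here are hypothetical):

```python
# Minimal first-fit placement sketch for IaaS multi-tenancy:
# several VMs (sized in cores) share each physical host.
def place_vms(vm_sizes, host_capacity):
    """Assign each VM to the first host with enough spare capacity."""
    hosts = []  # each host is a list of the VM sizes it carries
    for size in vm_sizes:
        for host in hosts:
            if sum(host) + size <= host_capacity:
                host.append(size)
                break
        else:
            hosts.append([size])  # no room anywhere: bring up a new host
    return hosts

# Six tenant VMs packed onto hosts with 8 cores each:
print(place_vms([4, 2, 6, 2, 4, 2], 8))  # [[4, 2, 2], [6, 2], [4]]
```

Packing VMs this way is what drives up the server utilization figures mentioned earlier, while the hypervisor preserves isolation between tenants.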


URL: https://www.sciencedirect.com/science/article/pii/B9780128036631000012

State of the Art on Technology and Practices for Improving the Energy Efficiency of Data Storage

Marcos Dias de Assunção, Laurent Lefèvre, in Advances in Computers, 2012

4.3.1 Combining Server and Storage Virtualization

By combining server virtualization with storage virtualization it is possible to create disk pools and virtual volumes whose capacity can be increased on demand according to the applications' needs. Typical storage efficiency of traditional storage arrays is in the 30–40% range; storage virtualization can increase the efficiency to 70% or higher, according to certain reports [35], which translates into lower storage requirements and energy savings.
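Those efficiency figures translate directly into raw-capacity savings. A quick back-of-the-envelope calculation, assuming (hypothetically) 100 TB of stored data:

```python
# Raw capacity needed to hold a given amount of data at a given
# storage efficiency (the 100 TB workload is an invented example).
def raw_capacity_tb(data_tb, efficiency):
    return data_tb / efficiency

traditional = raw_capacity_tb(100, 0.35)  # array at ~30-40% efficiency
virtualized = raw_capacity_tb(100, 0.70)  # virtualized pool at ~70%
print(round(traditional), round(virtualized))  # roughly 286 vs 143 TB raw
```

Halving the raw capacity that must be purchased, housed, and powered is where the energy savings come from.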

Storage virtualization technologies can be classified into the following categories [24]:

Block-level virtualization: this technique consists of creating a storage pool with resources from multiple network devices and making them available as a single central storage resource. Used in many SANs, this technique simplifies management and reduces costs.

Storage tier virtualization: this virtualization technique, generally termed Hierarchical Storage Management (HSM), allows data to be migrated automatically between different types of storage without users being aware. Software systems for automated tiering carry out such data migration activities. This approach reduces cost and power consumption because it allows only frequently accessed data to be stored on high-performance storage, while less frequently accessed data can be placed on less-expensive, more power-efficient equipment that uses techniques such as MAID and data de-duplication.

Virtualization across time to create active archives: this type of storage virtualization, also known as active archiving, extends the notion of virtualization and enables online access to data that would otherwise be offline. Tier virtualization software systems are used to dynamically identify the data that should be archived on disk-to-disk backup or tape libraries, or brought back to active storage.
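The HSM-style tiering described above amounts to a policy that maps each piece of data to a tier based on how often it is accessed. A simplified sketch, in which the threshold, tier names, and data set are all invented for illustration:

```python
# Simplified HSM-style tiering policy: frequently accessed objects go to
# fast storage, the rest to power-efficient archival storage.
def assign_tiers(access_counts, hot_threshold):
    """Map each object name to a tier based on its access count."""
    return {
        name: "fast" if count >= hot_threshold else "archive"
        for name, count in access_counts.items()
    }

tiers = assign_tiers({"logs": 2, "db": 120, "backup": 0}, hot_threshold=10)
print(tiers)  # {'logs': 'archive', 'db': 'fast', 'backup': 'archive'}
```

Production systems refine this with recency, data age, and migration cost, but the core decision is the same frequency-based classification.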

Storage virtualization is a technology that complements other solutions, such as server virtualization, by enabling the quick creation of snapshots and facilitating virtual machine migration. It also allows for thin provisioning, where actual storage capacity is allocated to virtual machines only when they need to write data, rather than in advance.
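Thin provisioning can be modeled as a volume that advertises a large virtual size while backing blocks with physical storage only on first write. The class below is a toy illustration of that idea; the block-level structure and names are invented, not any product's interface:

```python
# Toy model of thin provisioning: physical space is consumed only when
# a virtual block is first written, not when the volume is created.
class ThinVolume:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks  # advertised (virtual) size
        self.allocated = {}  # virtual block -> data, created on demand

    def write(self, block, data):
        if not 0 <= block < self.virtual_blocks:
            raise IndexError("block outside the virtual volume")
        self.allocated[block] = data  # physical space consumed only now

    def physical_usage(self):
        """Number of virtual blocks actually backed by physical storage."""
        return len(self.allocated)

vol = ThinVolume(virtual_blocks=1_000_000)  # advertises one million blocks
vol.write(0, b"boot")
vol.write(42, b"data")
print(vol.physical_usage())  # only 2 blocks physically allocated
```

The gap between the advertised and the physically consumed capacity is exactly what lets a provider oversubscribe a shared disk pool across many virtual machines.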


URL: https://www.sciencedirect.com/science/article/pii/B9780123965288000043

What is a hypervisor and what is its purpose?

A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing.

What is the main function of a hypervisor quizlet?

What is the main function of a hypervisor? It is used to create and manage multiple VM instances on a host machine.

What are the basic concepts of virtualization?

Virtualization relies on software to simulate hardware functionality and create a virtual computer system. This enables IT organizations to run more than one virtual system – and multiple operating systems and applications – on a single server. The resulting benefits include economies of scale and greater efficiency.

What hypervisor approach is the most secure?

Type 1 hypervisors are also more secure than Type 2 hypervisors. Hosted (Type 2) hypervisors, on the other hand, are much easier to set up than bare-metal hypervisors because you have an OS to work with, and they are compatible with a broad range of hardware.