Which of the following techniques should be used to mitigate the risk of data remanence when moving virtual hosts from one server to another in the cloud?

Cloud Service Providers and the Cloud Ecosystem

Dan C. Marinescu, in Cloud Computing (Second Edition), 2018

2.11 User Experience

There are a few studies of user experience based on a large population of cloud computing users. An empirical study of the experience of a small group of users of the Finnish Cloud Computing Consortium is reported in [385]. The main user concerns are: security threats; the dependence on a fast Internet connection; forced version updates; data ownership; and user behavior monitoring. All users reported that trust in cloud services is important, two thirds raised the point of fuzzy boundaries of liability between the cloud user and the provider, about half did not fully comprehend cloud functions and behavior, and about one third were concerned about security threats.

The security threats perceived by this group of users are: (i) abuse and villainous use of the cloud; (ii) APIs that are not fully secure; (iii) malicious insiders; (iv) account hijacking; (v) data leaks; and (vi) issues related to shared resources. Identity theft and privacy were a major concern for about half of the users questioned; availability, liability, data ownership, and copyright were raised by a third of respondents.

The suggested solutions to these problems are: Service Level Agreements and tools to monitor usage should be deployed to prevent the abuse of the cloud; data encryption and security testing should enhance the API security; an independent security layer should be added to prevent threats caused by malicious insiders; strong authentication and authorization should be enforced to prevent account hijacking; data decryption in a secure environment should be implemented to prevent data leakage; and compartmentalization of components and firewalls should be deployed to limit the negative effect of resource sharing.

A broad set of concerns identified by the NIST working group on cloud security includes:

Potential loss of control/ownership of data.

Data integration, privacy enforcement, data encryption.

Data remanence after de-provisioning.

Multi-tenant data isolation.

Data location requirements within national borders.

Hypervisor security.

Audit data integrity protection.

Verification of subscriber policies through provider controls.

Certification/Accreditation requirements for a given cloud service.

A 2010 study conducted by IBM [246] aimed to identify barriers to public and private cloud adoption. The study is based on interviews with more than 1000 individuals responsible for IT decision making around the world. 77% of the respondents cited cost savings as the key argument in favor of public cloud adoption, though only 30% of them believed that public clouds are “very appealing or appealing” for their line of business, versus 64% for private clouds and 34% for hybrid ones.

The reasons driving the decision to use public clouds, and the percentage of respondents who considered each element critical, are shown in Table 2.6. In view of the high energy costs of operating a data center, discussed in Section 9.2, it seems strange that only 29% of the respondents were concerned about lower energy costs.

Table 2.6. The reasons driving the decision to use public clouds.

Reason | Percentage who agree
Improved system reliability and availability | 50%
Pay only for what you use | 50%
Hardware savings | 47%
Software license saving | 46%
Lower labor costs | 44%
Lower maintenance costs | 42%
Reduced IT support needs | 40%
Ability to take advantage of the latest functionality | 40%
Less pressure on internal resources | 39%
Solve problems related to updating/upgrading | 39%
Rapid deployment | 39%
Ability to scale up resources to meet the needs | 39%
Ability to focus on core competencies | 38%
Take advantage of the improved economics of scale | 37%
Reduced infrastructure management needs | 37%
Lower energy costs | 29%
Reduced space requirements | 26%
Create new revenue streams | 23%

The top workloads mentioned by the users involved in this study are: data mining and other analytics (83%), application streaming (83%), help desk services (80%), industry-specific applications (80%), and development environments (80%).

The study also identified workloads that are not good candidates for migration to a public cloud environment:

Sensitive data such as employee and health care records.

Multiple co-dependent services, e.g., online transaction processing.

Third party software without cloud licensing.

Workloads requiring auditability and accountability.

Workloads requiring customization.

Such studies help identify the concerns of potential cloud users and the critical issues for cloud research.


URL: https://www.sciencedirect.com/science/article/pii/B9780128128107000029

Cloud Computing Architecture

Vic (J.R.) Winkler, in Securing the Cloud, 2011

Private Clouds

In contrast to a public cloud, a private cloud is internally hosted. The hallmark of a private cloud is that it is usually dedicated to an organization. Although there is no comingling of data or sharing of resources with external entities, different departments within the organization may have strong requirements to maintain data isolation within their shared private cloud. Organizations deploying private clouds often do so utilizing virtualization technology within their own data centers. A word of caution here: “Describing private cloud as releasing you from the constraints of public cloud only does damage to the cloud model. It's the discipline in cloud implementations that makes them more interesting (and less costly) than conventional IT. Private clouds could very well be more constrained than their public counterparts and probably will be to meet those needs that public clouds cannot address.”9

Since private clouds are, well, private, some of the security concerns of a public cloud may not apply. However, just because they are private does not mean that they are necessarily more secure. In a private cloud, considerations such as securing the virtualization environment itself (that is, hypervisor level security, physical hardware, software, and firmware, and so on) must still be addressed, whereas in a public cloud, you would rely on the provider to do so. As a result, when comparing public to private clouds, it may be difficult to make generalizations as to which is inherently more secure. But as we pointed out earlier in this chapter in the section on Control over Security in the Cloud Model, a private cloud offers the potential to achieve greater security over your cloud-based assets. However, between the potential for better security and the achievement of better security lie many ongoing activities. The true advantage of a private cloud is that “the provider has a vested interest in making the service interface more perfectly matched to the tenant needs.”10 However, it should also be pointed out that many of the sins of enterprise security have to do with the fact that the enterprise itself implements and manages its own IT security—which would be perfectly fine except security is generally not a core investment nor is it measured as though it were.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495929000026

Selecting and Engaging with a Cloud Service Provider

Raj Samani, ... Jim Reavis, in CSA Guide to Cloud Computing, 2015

Service Level Agreements

The typical approach for organizations to define the requirements they expect of the outsourced provider will be to leverage a series of service level agreements (SLAs). According to the European Commission’s publication “Cloud Computing Service Level Agreements,”6 an SLA is defined as

A Service Level Agreement (SLA) is a formal, negotiated document that defines (or attempts to define) in quantitative (and perhaps qualitative) terms the service being offered to a Customer. Any metrics included in a SLA should be capable of being measured on a regular basis and the SLA should record by whom

These SLAs are likely to incorporate many more requirements than security, but should be detailed enough to define not only the expectations, but also how monitoring of these SLAs is achieved. According to the European Network and Information Security Agency (ENISA) in its 2012 report entitled “Procure Secure, A Guide to Monitoring of Security Service Levels in Cloud Contracts,”7 there is a difference in the ability of the end customer to negotiate SLAs with the provider: “It is important to differentiate between small projects, in which the customer will simply make a choice between different types of service and their SLAs (service level agreements) offered on the market, and large projects, where the customer may be in a position to negotiate with providers about the kind of service or SLA required.”

Regardless of whether the end customer has the ability to negotiate service levels, as a minimum it is important to understand how the security controls will be deployed and monitored. Of course, the level of due diligence undertaken will depend on the value of the data being hosted/processed by the provider. The ENISA guidance identifies “the most relevant parameters which can be practically monitored”; these are defined as follows:

1. Service availability.
2. Incident response.
3. Service elasticity and load tolerance.
4. Data lifecycle management.
5. Technical compliance and vulnerability management.
6. Change management.
7. Data isolation.
8. Log management and forensics.

It may not be necessary for the end customer to define, or even review, all of the above service levels defined by the provider. This will be entirely dependent on the type of service being sought; for example, within an IaaS environment, the end customer will likely be responsible for ensuring security patches are deployed on the operating system and applications. Consequently, it is recommended to ensure that the scope of the project and the customer's expectations are clearly defined before any provider is engaged. This will allow any potential negotiation to be concluded more effectively and, where negotiation is not possible, make the review of the offered services more effective. The ENISA guidance should be reviewed to obtain further detail on the aforementioned parameters. Of course, that is the recommendation, and all cloud customers follow the recommended guidance, do they not?

Sadly, the reality is that many cloud customers fail to undertake the appropriate level of due diligence when defining the security service levels they expect from their providers. In a survey conducted by ENISA and subsequently published as “Survey and analysis of security parameters in cloud SLAs across the European public sector,”8 the results revealed that, while availability is often covered, many security parameters are rarely addressed. This survey requested content from information technology (IT) officers within the European public sector, and therefore the results reflect the implementation of cloud or IT projects for the European public sector.

The following are the key findings:

Governance frameworks/security standards: While ISO 2700x and the Information Technology Infrastructure Library (ITIL) were the frameworks/standards used by the majority of end customers, alignment with their third-party providers is rarely ensured. For example, only 22% of respondents stated that their IT providers are obliged to adhere to the same standards. Indeed, almost one in five (19%) providers were not obliged to comply with the standards used by their customers. Beyond the suggestion that this may undermine the information security management system (ISMS) of the end customer, the lack of consistency will likely increase the cost of managing security (e.g., hiring subject matter experts to understand the standards used by the provider) and reduce the efficiency of managing it.

Availability: The majority of SLAs (50%) referring to availability ensure that the “service is reachable by all clients”; however, 1 in 10 have not defined availability at all. This is rather surprising, as this particular requirement is the easiest to measure and is an important requirement regardless of the type of service provisioned. This would explain why just over the same percentage (11%) of respondents did not know what the requirements actually were. The largest share of SLAs (37%) referring to availability dictated two nines, or 99%, which suggests that the majority of externally provisioned services were not particularly critical (only 8% required four nines, or 99.99%). Equally, the majority of providers (70%) are obliged to report downtime within a given timeframe. Please see Table 2.2 for the downtime calculation corresponding to each availability level, with the results from the ENISA survey.

Table 2.2. Downtime Calculation

Availability | Downtime p/year | Downtime p/month | Downtime p/week | Survey result (%)
99% (two nines) | 3.65 days | 7.2 h | 1.68 h | 37
99.9% (three nines) | 8.76 h | 43.8 min | 10.1 min | 19
99.99% (four nines) | 52.56 min | 4.32 min | 1.01 min | 8
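For reference, the figures in Table 2.2 follow from a simple calculation: the allowed downtime is (1 - availability) multiplied by the length of the period. A minimal Python sketch (ours, not part of the ENISA material) reproduces the table; small differences arise from how a "month" is rounded.

    # Allowed downtime for a given availability level ("number of nines").
    # Illustrative sketch reproducing the figures in Table 2.2.

    PERIOD_HOURS = {"year": 365 * 24, "month": 30 * 24, "week": 7 * 24}

    def downtime_hours(availability_percent: float, period: str) -> float:
        """Return allowed downtime in hours for one period."""
        unavailable_fraction = 1.0 - availability_percent / 100.0
        return PERIOD_HOURS[period] * unavailable_fraction

    for nines in (99.0, 99.9, 99.99):
        per_year = downtime_hours(nines, "year")            # 87.6 h/year = 3.65 days for 99%
        per_month_min = downtime_hours(nines, "month") * 60  # in minutes
        per_week_min = downtime_hours(nines, "week") * 60    # in minutes
        print(f"{nines}%: {per_year:.2f} h/year, "
              f"{per_month_min:.1f} min/month, {per_week_min:.1f} min/week")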

Security provisioning: While there is a degree of consistency, with most respondents defining requirements for availability, security is less well defined. To an extent this is not hugely surprising, as the services that are provisioned externally are not particularly critical (with only 8% dictating very high availability), but the level of disparity among respondents is still surprising. Figure 2.2 shows the disparity regarding the components explicitly defined within the SLA.


FIGURE 2.2. Aspects explicitly defined in SLAs.

Security incidents: One example of a requirement that is not defined as consistently as availability is security incidents. A total of 42% of respondents commented that their SLAs did not include a classification of incidents, and a further 32% stated that the SLAs only briefly described how the classification schema works. Although the necessity of other categories is entirely subjective (e.g., cryptography is unlikely to be required when the data managed by the third party is not sensitive), there is no question that consistency regarding security incidents is important to all customers. In addition, only half of respondents confirmed that providers were obliged to notify them of security incidents within a given timeframe, and in only 43% of cases did the SLAs specify a recovery time for incidents where the provider faced penalties for not recovering within the time specified. To put this into perspective, less than half of providers were obliged to notify customers that their data/systems had experienced a security incident, and even when they did, the provider was not financially incentivized to recover in the time specified! This sounds remarkable, and one could easily point the finger at the provider, but sadly it demonstrates that, although the customer is ultimately responsible for ensuring such measures are clearly defined, they are on the whole failing to do so. This could of course be because the service being provisioned is of so little value to the end customer that any form of interruption is acceptable (however, reviewing the responses regarding availability, we know this is not entirely the case).

Service levels: Building upon the lack of penalties applied to providers in the event of a security incident, the survey revealed that almost one in four end customers do not define penalties in the event the provider fails to meet service levels.

Message from Said Tabet (CSA Co-Chair of the SLA Working Group), “Within the CSA we recognize the value in SLAs as they relate to cloud computing. We are therefore conducting a lot of work within this space, and the reader is encouraged to understand the areas of focus for our working group.”

When engaging with a cloud provider, it is recommended that the SLAs reflect the security requirements of the end customer. The level of flexibility afforded to the end customer will be limited; according to the Cloud Buyer’s Guide9: “A typical cloud contract will have little flexibility in negotiating terms and penalties — the only real option in a true cloud implementation will be a credit for downtime, with no enhancements for loss of business critical functionality.” Consequently, undertaking the appropriate due diligence in determining the security controls provided is imperative. This should be followed up in the SLAs, which, according to the buyer’s guide: “The single most important aspect of any liability and warranty statement revolves around breach of security and loss of personal data. The average cost per user of a data breach is approximately $215, so the liability language in the contract should ensure that the insurance coverage provided by the provider could cover the total fees for a breach where the provider is at fault. This liability clause must ensure that the provider WILL and MUST allow for a third party to perform a root cause analysis and be bound by the findings of the party. This may need to be negotiated prior to the execution of a contract.”
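As a rough illustration of the liability sizing described in the quoted guidance, the required coverage scales linearly with the number of affected users at the quoted average of roughly $215 per user. The subscriber count in the short sketch below is hypothetical.

    # Rough liability sizing sketch using the ~$215 average cost per user
    # quoted above; the affected-user count is a hypothetical example.

    COST_PER_USER_USD = 215

    def required_breach_coverage(affected_users: int) -> int:
        """Estimated total breach cost the provider's insurance should cover."""
        return affected_users * COST_PER_USER_USD

    print(required_breach_coverage(100_000))   # 21,500,000 USD for 100k affected users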


URL: https://www.sciencedirect.com/science/article/pii/B9780124201255000029

A Survey of Exploitation Techniques and Defenses for Program Data Attacks

Ye Wang, ... Guimin Zhang, in Journal of Network and Computer Applications, 2020

5.2.3 Data isolation

Data isolation techniques logically or physically isolate the storage area of data in the address space layout of the target program and enforce corresponding access control strategies for the different isolation areas to protect program data from being manipulated, thereby preventing PDAs. We divide data isolation techniques into two categories: software-based data isolation and hardware-based data isolation.

1) Software-based data isolation

Segmentation and paging mechanisms are examples of software-based data isolation. They build a layer of indirection for memory accesses in the operating system, and the CPU enforces the access control configured by the operating system while resolving the indirection. Segmentation allows developers to define the start address, size, and access permission of a segment. Unfortunately, segmentation has disappeared from modern architectures. On modern operating systems, however, paging creates an indirection that maps virtual memory to physical memory via page tables, which contain the translation information and a variety of access permissions. The access permissions in page tables can mark a page as kernel mode or user mode, which enables isolation between the operating system and user applications. To enable process isolation, the operating system gives every process the illusion of a contiguous virtual address space, and the page table of one process cannot be used by other processes. However, the paging mechanism does not differentiate between the read and execute permissions. Execute-no-Read (XnR) (Backes et al., 2014) extends the memory management system of the operating system to enforce the XnR primitive, i.e., executable-only memory, which distinguishes between legitimate code execution (instruction fetches) and illegal access to code using load/store instructions. XnR therefore prevents the step of dynamically gathering gadgets in JIT-ROP attacks. Software fault isolation (SFI) (Mao et al., 2011) inserts run-time checks before every memory access in untrusted code and provides data isolation in unsafe languages by preventing accesses to the protected region. However, it imposes a substantial overhead on all execution of untrusted code. Moreover, SFI protects only against bugs, not against active attacks such as ROP attacks that bypass the memory access checks. To provide strong security, SFI must be coupled with an additional solution (e.g., code pointer integrity (CPI) (Kuznetsov et al., 2014) (Evans et al., 2015)) to prevent advanced ROP attacks.
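To make the inserted run-time checks concrete, the toy Python model below (our illustration, not code from any of the systems cited above) confines every load and store issued on behalf of untrusted code to a designated sandbox region by masking the address first, which is the classic SFI-style check; real SFI instruments native code rather than an interpreter.

    # Toy model of software fault isolation (SFI): every memory access made on
    # behalf of untrusted code is forced into a sandbox region by address
    # masking, mimicking the run-time checks SFI compilers insert.

    MEMORY = bytearray(64 * 1024)          # flat "address space"
    SANDBOX_BASE = 0x4000                  # sandbox occupies 0x4000..0x4FFF
    SANDBOX_MASK = 0x0FFF                  # sandbox size is 4 KiB (power of two)

    def sandboxed_addr(addr: int) -> int:
        """Force an arbitrary address into the sandbox region."""
        return SANDBOX_BASE | (addr & SANDBOX_MASK)

    def untrusted_store(addr: int, value: int) -> None:
        MEMORY[sandboxed_addr(addr)] = value & 0xFF

    def untrusted_load(addr: int) -> int:
        return MEMORY[sandboxed_addr(addr)]

    # An out-of-bounds write aimed at 0x9ABC lands inside the sandbox instead.
    untrusted_store(0x9ABC, 0x41)
    print(hex(sandboxed_addr(0x9ABC)))     # 0x4abc, inside 0x4000..0x4FFF
    print(untrusted_load(0x9ABC))          # 65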

2) Hardware-based data isolation

Extended page tables (EPT) are configured by the hypervisor to facilitate memory virtualization, and they support setting read, write, and execute permissions individually. Readactor (Crane et al., 2015) is an example of hardware-based data isolation that enforces execute-only pages via EPT. Also using EPT, No-Execute-After-Read (NEAR) (Werner et al., 2016) allows code or data to be executed if and only if it has not previously been read, thus preventing any disclosed code from subsequently being executed and thwarting JIT-ROP attacks. Memory protection extensions (MPX) (Oleksenko et al., 2017) were introduced in the Skylake CPU series and provide architectural support for bounds checking. BoundShield (Jin et al., 2018) divides the user address space into two regions, i.e., the regular region and the secret region, and provides strong access isolation between the two regions by leveraging MPX. Memory protection keys (MPK) (Jin et al., 2018) introduce a new register holding per-key access rights and enable programmers to tag memory. MPK associates one of 16 protection keys with each memory page, thus partitioning the address space into up to 16 domains. ERIM (Vahldiek-Oberwagner et al., 2018) enables data isolation within user-space applications with MPK and supports in-process isolation with low overhead, even at high switching rates between the untrusted application and the trusted component. IMIX (Frassetto et al., 2018) and MicroStache (Mogosanu et al., 2018) provide novel instruction set architecture (ISA) extensions for effective and efficient in-process isolation. In-process isolation is a mechanism that ensures access only by the defence code while denying access by the potentially vulnerable application code. Unlike previous data isolation methods, in-process isolation aims to decrease the runtime overhead incurred by frequent context switches between the regular region and the secret region, and to protect the data region of some defence mechanism (e.g., CPI (Kuznetsov et al., 2014) (Evans et al., 2015)) from unintended or malicious accesses. HardScope (Nyman et al., 2017) introduced a set of seven new instructions to achieve in-process data isolation on the open-source RISC-V Pulpino microcontroller and mitigates DOP attacks. The HardScope instructions are instrumented into the target program to dynamically create rules that determine which code blocks can access which pieces of memory, thereby enforcing all memory access constraints at run time.
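The protection-key idea can be illustrated with a small simulation (ours, not code from ERIM, BoundShield, or any other cited system): each page carries one of 16 keys, and a PKRU-like per-thread register decides, per key, whether the current thread may touch pages in that domain. Real MPK enforces this in hardware via the PKRU register and the pkey_mprotect system call; the sketch only models the policy.

    # Toy simulation of memory protection keys (MPK): pages are tagged with one
    # of 16 protection keys, and a PKRU-like register decides, per key, whether
    # the current "thread" may access pages carrying that key.

    PAGE_KEY = {}                 # page number -> protection key (0..15)
    pkru_allowed = {0}            # keys the current thread may access

    def tag_page(page: int, key: int) -> None:
        PAGE_KEY[page] = key

    def access(page: int) -> str:
        key = PAGE_KEY.get(page, 0)
        if key not in pkru_allowed:
            raise PermissionError(f"page {page} is in protected domain {key}")
        return f"page {page} accessed"

    tag_page(7, 1)                        # page 7 belongs to the secret domain
    try:
        access(7)                         # untrusted application code: denied
    except PermissionError as e:
        print("blocked:", e)

    pkru_allowed.add(1)                   # trusted defence code opens domain 1
    print(access(7))                      # now permitted
    pkru_allowed.discard(1)               # close the domain again on exit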

The software-based data isolation techniques introduce a non-negligible performance overhead and are therefore often impractical to apply, whereas hardware-based data isolation techniques rely on specific hardware and are difficult to deploy widely. Additionally, recent attacks have shown that hiding-based defence strategies are a weak isolation model because sensitive data are not truly isolated: an adversary can probe the safe regions with ingenious methods and thereby bypass the defence (Oikonomopoulos et al., 2016) (Evans et al., 2015) (Göktaş et al., 2016).


URL: https://www.sciencedirect.com/science/article/pii/S1084804520300084

The future of sustainable chemistry and process: Convergence of artificial intelligence, data and hardware

Xin Yee Tai, ... Jin Xuan, in Energy and AI, 2020

Abstract

Sustainable chemistry for renewable energy generation and green synthesis is a timely research topic, with the vision of meeting present needs without compromising future generations. In the era of Industry 4.0, sustainable chemistry and processes are undergoing a drastic transformation from continuous flow systems toward the next level of operations, such as cooperating and coordinating machines, self-decision-making systems, and autonomous and automatic problem solvers, by integrating artificial intelligence, data, and hardware in cyber-physical systems. Due to the lack of convergence between the physical and cyber spaces, open-loop systems face challenges such as data isolation, slow cycle times, and insufficient resource management. Emerging research has been devoted to accelerating these cycles and reducing the time between multistep processes and real-time characterization via additive manufacturing, in-/on-line monitoring, and artificial intelligence. The final goal is to concurrently propose process recipes, flow synthesis, and molecule characterization in sustainable chemical processes, with each step transmitting and receiving data simultaneously. This process is known as ‘closing the loop’, which will potentially create a future lab with highly integrated systems and generate a service-orientated platform for end-to-end synchronization and self-evolving, inverse molecular design, and automatic science discovery. This perspective provides a methodical approach for understanding the cyber and physical systems individually, enabled by artificial intelligence and additive manufacturing, respectively, in combination with in-/on-line monitoring. Moreover, the future perspective and key challenges for the development of closed-loop systems in sustainable chemistry and processes are discussed.


URL: https://www.sciencedirect.com/science/article/pii/S2666546820300367

Chen Wu, ... Hubert Konik, in Journal of Systems Architecture, 2021

6.1 Unresolved challenges

Some challenges of using FPGAs in the cloud have not been fully resolved owing to their complexity. Here, we describe two major challenges: isolation and diversity.

6.1.1 Isolation

With the increasing efforts to provide a cloud environment in which multiple tenants deploy DNNs on shared FPGAs, resource and performance isolation have become a concern in the cloud.

DNN accelerators on the FPGA usually run with full hardware access and may share resources. Therefore, malicious code can attack the entire platform, affecting other tenants [112,113]. Additionally, dataset collection can be time-consuming and expensive, particularly in industrial cases where datasets are of significant commercial value. Providing strict data and resource isolation for multiple tenants can prevent unauthorized access to the dataset and avoid data leakage [114,115].

Additionally, a DNN application may affect the performance of other DNN applications during concurrent execution [112,116], which causes unreliable performance. However, few works [44,93] discuss performance isolation problems, and the issue remains underexplored.

6.1.2 Diversity

Diversity of DNN functions: Owing to resource limitations and development difficulties, the networks reported in the literature are standard ones (such as AlexNet and VGG) with common functions (such as convolution and pooling). With the continuous emergence of new DNNs, the DNN functions that can currently be implemented on FPGAs have not kept pace with the development of DNN algorithms. However, the cloud environment provides more possibilities for exploring the deployment of DNNs with a rich set of functions on FPGAs by providing more resources and abstraction layers, and it can promote the diversity of DNN IP development.

Diversity of DNN usage: Training is a difficult phase to perform on the FPGA, because all the features must be stored in memory until the corresponding errors are backpropagated, which requires more storage than inference. Existing works mainly focus on performing DNN inference with relatively simple functions on the FPGA. Benefiting from the “unlimited” capacity and resources provided by the FPGA cloud, DNN training, fine-tuning, transfer learning, and the support of new functions in DNNs will become more feasible.


URL: https://www.sciencedirect.com/science/article/pii/S1383762121001752

Ethical Principles and Governance Technology Development of AI in China

Wenjun Wu, ... Ke Gong, in Engineering, 2020

2.1 Data security and privacy

Data security is the most basic and common requirement of ethical principles for AI. Many governments, including those of the European Union (EU), the United States, and China, are establishing legislation to protect data security and privacy. For example, the EU enforced the General Data Protection Regulation (GDPR) in 2018, and China enacted the Cybersecurity Law of the People’s Republic of China in 2017. The establishment of such regulations aims to protect users’ personal privacy and poses new challenges to the data-driven AI development commonly adopted today.

In the paradigm of data-driven AI, developers often need to collect massive data from users in a central repository and carry out subsequent data processing, including data cleaning, fusing, and annotation, to prepare datasets for training deep neural network (DNN) models. However, the newly announced regulations hamper companies from directly collecting and preserving user data on their cloud servers.

Federated learning, which can train machine learning models across decentralized institutions, presents a promising solution that allows AI companies to address the serious problem of data fragmentation and isolation in a legal way. Researchers from the Hong Kong University of Science and Technology and other institutes [4] have identified three kinds of federated learning modes: horizontal federated learning, vertical federated learning, and federated transfer learning. Horizontal federated learning is applicable when participating parties have non-overlapping datasets but share the same feature space in data samples. Vertical federated learning is applicable when the datasets from participants refer to the same group of entities but differ in their feature attributes. When the datasets meet neither condition (i.e., they have different data samples and feature spaces), federated transfer learning is a reasonable choice. Using these modes, AI companies are able to establish a unified model for multiple enterprises without sharing their local data in a centralized place.
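A minimal sketch of the horizontal mode under the assumptions above (parties hold different samples but the same feature space): each institution trains locally and only model weights, never raw data, are sent to the coordinator, which averages them weighted by local sample counts. The model, client names, and numbers below are placeholders, not part of the cited work.

    # Minimal sketch of horizontal federated averaging: each institution keeps
    # its data local and contributes only model weights, which the coordinator
    # combines weighted by local sample counts. Placeholder data and model.

    from typing import List

    def local_update(weights: List[float], local_data) -> List[float]:
        """Placeholder for one round of local training on private data."""
        # In practice: run SGD on the institution's own samples.
        return [w + 0.01 for w in weights]   # dummy update

    def federated_average(updates: List[List[float]],
                          sample_counts: List[int]) -> List[float]:
        """Combine client updates, weighted by how much data each client has."""
        total = sum(sample_counts)
        dim = len(updates[0])
        return [sum(u[i] * n for u, n in zip(updates, sample_counts)) / total
                for i in range(dim)]

    global_weights = [0.0, 0.0, 0.0]
    clients = [("institution_a", 1000), ("institution_b", 4000)]   # hypothetical parties

    for _ in range(3):                                             # communication rounds
        updates = [local_update(global_weights, None) for _name, _n in clients]
        global_weights = federated_average(updates, [n for _name, n in clients])

    print(global_weights)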

Federated learning not only presents a technical solution for privacy protection in the collaborative development of distributed machine learning models among institutions, but also indicates a new business model for developing a trusted digital ecosystem for the sustainable development of an AI society. By running federated learning on blockchain infrastructure, it may be possible to motivate members in the digital ecosystem via smart contracts and trusted profit exchanges to actively share their data and create federated machine learning models.

Federated learning is increasingly being adopted by online financial institutions in China. WeBank established an open-source project on federated learning and contributed the Federated AI Technology Enabler (FATE) framework to the Linux Foundation. WeBank’s AI team [5] also launched an Institute of Electrical and Electronics Engineers (IEEE) standardization effort for federated learning and has started to draft an architectural framework definition and application guidelines.


URL: https://www.sciencedirect.com/science/article/pii/S2095809920300011

Web application protection techniques: A taxonomy

Victor Prokhorenko, ... Helen Ashman, in Journal of Network and Computer Applications, 2016

5 Discussion and conclusion

Observing the general picture, some similarities between seemingly different attack types can be noted. For example, XSS attacks are at their core very similar to SQL injection attacks. Both are possible because of context mismatches. In the case of SQL injection, user-supplied data is wrongly placed in the SQL command context, for example by escaping the string context or a command boundary; in XSS attacks, the user input escapes the HTML tags intended by the author. The root of the problem lies in the lack of separation of data and (sometimes multiple) code streams. The misinterpretation of data context can also be explained by the perception difference between the developer and a web browser.

An example of this difference for a web page fragment (central column) is illustrated in Fig. 2. A browser interprets the fragment as having four context transitions. However, the web page developer intended the fragment to have only two context transitions. The first intended transition is the change of context from HTML markup to static text, and the second is the reverse context switch back to HTML markup after the static text ends. However, if a malicious web page visitor manages to inject <img src=”…”/> in place of a legitimate user name, two unexpected context transitions are introduced. The consequences vary depending on the actual type of context transitions. For example, an unexpected text-to-HTML transition may lead to an XSS attack.


Fig. 2. Web page perception difference.

Despite the similarities between the roots of XSS and SQL injection attacks, it may be hard to apply the same protection against both due to technical difficulties. For example, detecting XSS attacks can be performed by analysing the HTTP responses, but for SQL injections, other sources such as PHP code analysis or run-time database request monitoring are required. Given the root cause similarities of these kinds of attacks, a unified approach would be beneficial; a sketch of the shared countermeasure principle follows.
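A minimal illustration of that shared principle, keeping user-supplied data in the data context of each output language, using Python's standard sqlite3 and html modules (the surveyed applications are PHP-based, so this is an illustrative analogue, not the paper's implementation):

    # Keeping user-supplied data in its intended context, the shared root cause
    # the authors identify for SQL injection and XSS. Illustrative sketch only.

    import html
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_input = "alice'; DROP TABLE users; --<img src=x onerror=alert(1)>"

    # SQL: a parameterized query keeps the input in the data context, so it can
    # never terminate the string literal or the command.
    conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
    print(conn.execute("SELECT count(*) FROM users").fetchone())  # table still exists

    # HTML: escaping keeps the input in the static-text context, so it cannot
    # introduce unexpected context transitions.
    page_fragment = "<p>Welcome, " + html.escape(user_input) + "</p>"
    print(page_fragment)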

Such context mismatch based attacks are possible because of lack of user-supplied data isolation. Some approaches achieve such data isolation by introducing an additional abstraction layer, which treats user-supplied input in a special way (Doupé et al., 2013). Unfortunately such approaches are difficult to apply to existing legacy web applications. Another limiting factor is the necessity to learn new programming techniques introduced by such approaches.

Taking a wider view, it can be seen that the need for user-supplied data isolation is not only inherent to web applications, but applies to desktop and other types of network applications as well. For example, stack overflow attacks succeed when user-supplied data is misinterpreted as code and subsequently executed. While the ability to separate data and code is a feature inherent to the Harvard CPU architecture, such a separation can be introduced for the von Neumann CPU architecture for security reasons. A common protection against stack overflow attacks involves the use of CPU-specific technology to mark specific memory regions as non-executable (Marco-Gisbert and Ripoll, 2014). Such additional security measures can be effective in some cases; however, the use of interpreted programming languages commonly found in web development makes such protection inapplicable. The complications arise because of the additional abstraction layers introduced by the interpreter itself, the crucial difference being a different memory view. At a lower level, data execution is prevented by setting a non-executable (NX) bit for a whole memory page. However, as programming language interpreters tend to provide portability and hence aim to be CPU-independent, such low-level notions as memory pages may not be exposed to the programmer at all. Simply put, the CPU-level NX bit would not help secure PHP web applications because both PHP code and HTML data are treated as data from the point of view of the CPU. In contrast, PHP code is treated as code and HTML data as data by the PHP interpreter.

In the context of interpreters, a possible solution would be to introduce data and code separation at the interpreter level, for example by treating all string variables as objects having a source attribute and applying tainting techniques to keep track of the source of a given string. Such source information would be lost if the string were simply transferred over a network. Thus, rather than introducing a new data transfer method (which might be complicated for compatibility reasons), the generated web page should be analysed on the server side first. If the analysis does not reveal any context violations, the generated page is passed on to the client side. Otherwise, the generated page should be dropped and an alarm could be raised.
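A minimal sketch of the string-with-a-source-attribute idea, written in Python as a stand-in for an instrumented interpreter; TaintedStr and render are our illustrative names, not an API from the paper:

    # Minimal sketch: strings carry a source attribute, and the generated page
    # is checked on the server side before being sent. If a user-sourced part
    # introduces markup (a context violation), the page is dropped.

    class TaintedStr(str):
        def __new__(cls, value, source="developer"):
            obj = super().__new__(cls, value)
            obj.source = source
            return obj

    def render(parts):
        """Assemble the page, rejecting user-sourced parts that add markup."""
        for part in parts:
            if getattr(part, "source", "developer") == "user" and "<" in part:
                raise ValueError("context violation: page dropped, alarm raised")
        return "".join(parts)

    page = [TaintedStr("<p>Hello, "),
            TaintedStr('<img src="x"/>', source="user"),   # injected "user name"
            TaintedStr("</p>")]
    try:
        print(render(page))
    except ValueError as e:
        print(e)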

An important advantage of such an approach is the ability to automatically retrieve the intended web page structure. The developer's intentions are a critical part of the design of an application. Programming bugs commonly found in applications are essentially a mismatch between the developer's intentions and actual application behaviour (unless the developer makes mistakes on purpose). However, the developer's intentions may be completely overlooked by black-box protection approaches. Such neglect is typical of anomaly detection based protections, which aim to determine whether an application behaves “as usual” rather than “as intended”. An alarm should still be raised if an application behaves as usual, but not as intended. Unfortunately, detecting intentionally malicious functionality is a more complicated process requiring a formal definition of what malicious functionality is. Even human brains might sometimes fail to recognise dangerous code for long periods of time (Imran et al., 2014), making it unlikely that reliable, fully automated solutions will emerge in the near future.

Another advantage, in contrast to probabilistic detection approaches, would be precise detection. Observing the behaviour of an application and having access to the source code, it is relatively easy to confirm whether the behaviour observed was intended. Thus, unlike for probabilistic approaches, a precise decision is possible if the developer intent (derived from the available source code) is taken into account.

Decisions based on the developer intent rely only on the fixed implementation. Anomaly-detection based protections are commonly probabilistic and are significantly different in nature, as such protections rely on the observed application behaviour to make decisions (Kruegel and Vigna, 2003; Mutz et al., 2006). However, the observed application behaviour is directly affected by the behaviour exhibited by the application users. In other words, anomaly-detection based protection decisions depend on how the users use the application rather than on how the application itself is designed. While focusing on user actions does indeed allow one to detect some anomalies, such anomaly detection does not allow one to determine whether the observed application behaviour is as expected.

In contrast, relying only on the developer intent obviously means that such an approach would not be effective if the protected application is intentionally dangerous. Consider an application that has a backdoor allowing an attacker to perform arbitrary operations without signing in. None of the attacks performed with the use of the backdoor will be detected, as the observed behaviour can be assumed to be intended. Therefore, using such a protection approach for third-party applications may be essentially self-deception. However, the same problem applies to anomaly detection based protections if an attacker uses the backdoor frequently enough (including during the training phase) for the protection to consider the observed behaviour to be usual. Such a limitation naturally narrows the applicability of the approach to in-house solutions or applications with a trusted code base. Table 3 provides a summary of the conceptual advantages and disadvantages inherent to the proposed classification properties.

Table 3. Comparison of protection techniques highlighting advantages and disadvantages.

Classification property | Advantages | Disadvantages
Statistics | May be able to detect unknown types of attacks. | Imprecise by design. Application developer intent is not directly taken into account. Require training phase (clean environment and functionality coverage issues). May be subject to malicious retraining.
Policy | Easy to implement. Low false-positives rate. May be applicable for applications containing intentionally malicious code (backdoors). | Application developer intent is not considered. Vulnerable to 0-day attacks. Requires rule set maintenance. May be ineffective if an attacker is aware of the exact rule set deployed.
Intent | Application developer intent explicitly considered. | Not applicable for applications containing intentionally malicious code (backdoors).
Inputs | Source code not required. Easy to implement. Programming language independent. | May not reveal the actual problem in the application. Selection of inputs to monitor not formalised. Application behaviour ignored.
Application | Allows detecting and fixing bugs in the application itself. | Source code required. Programming language specific.
Outputs | Source code not required. Easy to implement. Programming language independent. | May not reveal actual problems in the application. Selection of outputs to monitor not formalised.

In summary, expanding intent-based decisions to consider more intention aspects looks beneficial as such approaches would be able to verify more application behaviour aspects at run time. Detecting unexpected situations (presumably caused by attacks) can also provide enough details to aid in fixing the vulnerabilities.

A more general and long-term solution, although much more time-consuming, would be to redesign the currently existing web development standards to include such explicit data and code separation. Such a significant paradigm shift may indeed help in developing more secure web applications; however, the vast amount of existing legacy software makes such a solution economically infeasible in the near future.


URL: https://www.sciencedirect.com/science/article/pii/S1084804515002908

A stakeholder-oriented security analysis in virtualized 5G cellular networks

Chiara Suraci, ... Antonio Iera, in Computer Networks, 2021

SaaS

Through the Software as a Service (SaaS) model, the MNO provides complete software solutions to consumers, remotely hosting the entire application stack. In this vein, the MNO must cater for most of the risks, since it has to implement security measures to protect applications, the operating system, virtual resources, and the physical infrastructure. The Tenant has no control over the MEC infrastructure and is responsible only for the security of its data. The Subscribers have to rely on the security policies implemented by the Tenant for the protection of their data. Several attacks among those listed in Section 3.1 should be considered for this model.

Together with its own data, the MNO stores in its network information on the Tenant and its Subscribers; thus, it is of paramount importance that the strong security measures necessary to protect these data are applied. The MNO has to pay keen attention both to Data Loss prevention, due to failures or attacks, and to Data Isolation, when data belonging to different Tenants share the same MEC server. Besides, Data Breaches can happen either when a malicious Tenant violates the MNO’s data or when the Tenant’s data are violated by a malicious or compromised MNO. For its part, the Tenant has to properly safeguard the data of its Subscribers both from other Tenants and from other users.

The MNO must also implement robust Identity, Credential and Access Management policies to protect its assets and prevent attacks such as Man-in-the-Middle, whereby an attacker (e.g., a malicious or compromised Tenant) can illegitimately access the offered service. Obviously, effective mutual authentication mechanisms could also help the Tenant avoid services provided by a malicious MNO. Additionally, each Tenant must safely guard its identity and especially those of its Subscribers.

Malicious insiders in the MNO’s system administration represent a further risk of compromising the MNO’s reputation [39] and of exposing the Tenant to security risks, since it may receive compromised services that violate its data. In SaaS, this threat is much more dangerous than in the PaaS and IaaS cases because the MNO has greater control over the network [15]. Similarly, a malicious entity inside the Tenant’s system could breach the security of Subscribers.

With SaaS, consumers usually access the offered services through web browsers; thus, the MNO must not overlook vulnerabilities in the offered software and in the protocols used (e.g., HTTP) in order to protect its network from attacks. A typical category of attack is the Abuse and Nefarious Use of Services, performed for example by a consumer that executes a Malware Injection attack against the MNO’s MEC server, profiting from the services offered.

Last but not least, DoS attacks represent a threat to the whole MNO network, because the MEC server usually shares the same physical machines with other important network functions, which risk starvation when a DoS attack on an offered application occurs.


URL: https://www.sciencedirect.com/science/article/pii/S1389128620312366

Review and analysis of networking challenges in cloud computing

Jose Moura, David Hutchison, in Journal of Network and Computer Applications, 2016

5.1.1 Generic cloud security aspects

Initiating our discussion about security, as a generic (and obvious) but very important topic, one can argue that the security of cloud services should be no worse than that of the network services provided to customers through their local network infrastructures. To achieve this goal, a cloud provider should be conscious of the following aspects:

The cloud provider needs to apply the most recent security patches to its cloud infrastructure, including firmware, operating systems, and applications. Some problems could occur in cloud operation due to incompatible patches; for this reason, a rollback option should be available to return the infrastructure to the last stable configuration.

Data isolation must be supported among multiple VMs sharing the resources of the same physical host. Hypervisors also need their security patches to be up to date.

The cloud paradigm is changing the way the major management functions are deployed. Middleboxes such as load balancers and firewalls are moving from the local infrastructure to the cloud (Sherry et al., 2012). As the cloud infrastructure could be a federation of clouds, these middleboxes should be deployed in a distributed and coordinated way across distinct network domains. This also implies that the middleboxes should be operating with the latest security patches.

Authentication and trust mechanisms are needed by the user and provider alike. In this scenario, SSO could be a good starting point. The spam e-mail problem can also be mitigated in the cloud (e.g., spam could be verified and filtered in the VS associated with the hypervisor). Some useful techniques to mitigate spam in clouds include the Sender Policy Framework (SPF), to authenticate the source of each e-mail, and the Apache SpamAssassin Project, to classify, rank, and filter any unwanted e-mail.

To enable communications among the diverse cloud resources/hosts (sometimes from distinct providers) similar to that of a closed local network, the cloud provider’s resources/hosts need to be reachable in a secure way through Virtual Private Network (VPN) tunnels. In one initiative, which addresses security and transparency simultaneously, CloudNet makes use of a Virtual Private Cloud (VPC), which brings CC and VPN technology together to give the user a private set of cloud resources (Wood et al., 2011).

Cloud services are often made public. Consequently, non-authorized access should be prevented (Patel et al., 2013; Modi et al., 2013). In addition, Distributed Denial of Service (DDoS) attacks carried out by compromised user machines generate a large amount of bogus traffic. To avoid the negative impact of this traffic on system performance, the cloud infrastructure can try to identify that traffic and then discard it from the network, redirecting it to a “black hole”.

The Cloud Security Alliance (CSA) is working on the initial identification of the top security threats in cloud systems and in mobile computing, and on the establishment of the most appropriate actions/strategies to avoid those threats.


URL: https://www.sciencedirect.com/science/article/pii/S108480451500288X

Which of the following security controls provides Windows system administrators with an efficient way to deploy system configuration settings across many devices?

Group Policy. Group Policy Objects (GPOs) give Windows administrators a centralized and efficient way to deploy and enforce system configuration settings across a large number of devices; patch management, one of the listed options, addresses software updates rather than configuration settings.

Which of the following vulnerabilities can be prevented by using proper input validation? (Select any that apply.)

Proper input validation can prevent cross-site scripting, SQL injection, directory traversal, and XML injections from occurring.
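A minimal illustration of allowlist-style input validation in Python (our sketch; the field name and allowed pattern are hypothetical):

    # Allowlist-style input validation: only expected characters are accepted
    # and traversal sequences are rejected before the value reaches SQL, HTML,
    # XML, or the filesystem. Illustrative only.

    import re

    USERNAME_RE = re.compile(r"[A-Za-z0-9_.-]{1,32}")

    def validate_username(value: str) -> str:
        if not USERNAME_RE.fullmatch(value) or ".." in value:
            raise ValueError("rejected input: " + repr(value))
        return value

    print(validate_username("alice_01"))            # accepted
    try:
        validate_username("../etc/passwd")          # directory traversal attempt
    except ValueError as e:
        print(e)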

Which of the following types of digital forensic investigations is most challenging due to the on demand nature of the analyzed assets?

The on-demand nature of cloud services means that instances are often created and destroyed again, with no real opportunity for forensic recovery of any data.

Which of the following techniques would allow an attacker to get a full listing of your internal DNS information if your DNS server is not properly secured?

OBJ-1.2: A DNS zone transfer provides a full listing of DNS information. If your organization's internal DNS server is improperly secured, this can allow an attacker to gather this information by performing a zone transfer.
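For illustration, a zone transfer attempt is commonly scripted as below; this sketch assumes the third-party dnspython package, and the server address and zone name are placeholders. A properly secured server should refuse the request, for example by restricting transfers to known secondary servers.

    # Sketch of testing whether a DNS server allows unauthorized zone
    # transfers (AXFR). Assumes the dnspython package; the server address and
    # zone name are placeholders.

    import dns.query
    import dns.zone

    NAMESERVER = "192.0.2.53"        # placeholder internal DNS server
    ZONE = "example.internal."       # placeholder zone name

    try:
        zone = dns.zone.from_xfr(dns.query.xfr(NAMESERVER, ZONE))
        print("zone transfer allowed, records exposed:")
        for name, node in zone.nodes.items():
            print(name, node.to_text(name))
    except Exception as exc:
        print("zone transfer refused or failed:", exc)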