In client/server computing, the server provides users access to system resources.

Contingency Planning

Stephen D. Gantz, Daniel R. Philpott, in FISMA and the Risk Management Framework, 2013

Client/Server Systems

Client/server systems often store data and perform processing on both the server and the client, meaning contingency planning for this type of system needs to address potential disruptions to the server, clients, and the connectivity between clients and server components of the system. Contingency strategies for client/server systems that system owners should consider include: [65]

Using results of the business impact analysis to determine contingency requirements for mitigating possible impacts.

Storing backups for all data and both client and server software components offsite or at an alternate site.

Standardizing hardware, software, and peripherals to facilitate recovery of multiple clients following an outage.

Documenting system configuration information and vendor contact information for hardware, software, and other system components to assist in system recovery or procurement of replacement components.

Aligning contingency solutions with security policies and system security controls, including coordinating security provisions between primary and alternate sites.

Minimizing the amount of data stored on client computers such as user workstations by recommending or requiring data storage on the server or centralized storage technologies.

Automating system and data backups for server and client computers and configuring computers in a manner that facilitates centralized backup, such as by ensuring that computers remain powered up overnight or during scheduled backup windows.

Coordinating contingency solutions with cyber incident response procedures to limit the impact to the system from security incidents and to ensure that no compromise of data confidentiality or integrity occurs during contingency operations.

Key decision points for selecting client/server contingency solutions include the choice of backup procedures, media, hardware, and software, as well as the encryption technologies and cryptographic modules needed to provide sufficiently strong protection for removable media and for data stored on client and server computers.
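As an illustration of the backup-procedure decisions mentioned above, the following minimal Python sketch automates a nightly copy of client data to a centralized, offsite or alternate-site target. The paths and schedule are hypothetical; a real deployment would use the organization's sanctioned backup tooling and add encryption for removable media and stored data.

```python
import datetime
import pathlib
import shutil

# Hypothetical paths, for illustration only; real deployments would follow
# the organization's backup policy and encrypt data at rest and in transit.
CLIENT_DATA = pathlib.Path("/home/users")          # client-side data to protect
BACKUP_ROOT = pathlib.Path("/mnt/offsite_backup")  # offsite/alternate-site target

def nightly_backup() -> pathlib.Path:
    """Copy client data to centralized storage during the backup window."""
    stamp = datetime.datetime.now().strftime("%Y%m%d")
    target = BACKUP_ROOT / stamp
    shutil.copytree(CLIENT_DATA, target, dirs_exist_ok=True)
    return target

if __name__ == "__main__":
    print("Backed up to", nightly_backup())
```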

URL: https://www.sciencedirect.com/science/article/pii/B9781597496414000151

Service Enablement of Client/Server Applications

Tom Laszewski, Prakash Nauduri, in Migrating to the Cloud, 2012

Replacing Client/Server Systems with a Cloud-based Application

Replacing client/server systems is an option that is often chosen because the application seems lightweight compared to traditional legacy applications. Therefore, replacing a CRM with a Software as a Service (SaaS)-like system such as salesforce.com may seem like an easy solution. But doing so requires migrating customer data to the cloud and accepting the rules and logic of the new solution. A Gartner report identifies data issues and vendor lock-in as among the main drawbacks to replacing in-house applications with SaaS: “Disadvantages can include inconsistent data semantics, data access issues, and vendor lock-in” [3].

URL: https://www.sciencedirect.com/science/article/pii/B9781597496476000090

Introduction

Philip A. Bernstein, Eric Newcomer, in Principles of Transaction Processing (Second Edition), 2009

Client-Server

In a client-server system, a large number of personal computers communicate with shared servers on a local area network. This kind of system is very similar to a TP environment, where a large number of display devices connect to shared servers that run transactions. In some sense, TP systems were the original client-server systems with very simple desktop devices, namely, dumb terminals. As desktop devices have become more powerful, TP systems and personal computer systems have been converging into a single type of computing environment with different kinds of servers, such as file servers, communication servers, and TP servers.

There are many more system types than we have space to include here. Some examples are embedded systems, computer-aided design systems, data streaming systems, electronic switching systems, and traffic control systems.

URL: https://www.sciencedirect.com/science/article/pii/B9781558606234000019

DVS Archiving and Storage

Anthony C. Caputo, in Digital Video Surveillance and Security, 2010

Client/Server Architecture

Larger organizations already recognize the benefits of using a client/server system within an enterprise-size environment. However, I believe it's especially important for smaller to mid-sized companies. Microsoft's network operating system (NOS) is an affordable server OS (it is offered at multiple pricing tiers) that comes with a suite of solutions. This server can offer smaller companies the same cost-efficiencies and improved processes as the goliaths of industry.

The Windows NOS includes extended built-in functionalities such as the ability to plug and play more computers onto your network without having to touch it. This is accomplished through the Dynamic Host Configuration Protocol (DHCP), so when a colleague visits from a satellite office out of town, you don't need to figure out how to get his documents printed, e-mailed, or transferred – just plug him into the network and the NOS will automatically assign him a temporary IP address to access network resources (see Chapter 3 for more information on DHCP). Windows NOS also includes built-in administrative access from a remote location using Terminal Services and Remote Desktop, which can also be used to view video surveillance footage remotely.
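As a rough illustration of the dynamic addressing described above, the short Python sketch below reports the local address a client is currently using, which on such a network is typically the lease handed out by the DHCP server. The peer address is a reserved TEST-NET value used only so the operating system selects an interface; no traffic is actually sent.

```python
import socket

# Connecting a UDP socket sends no packets; it only asks the OS to choose
# the local interface/address it would use to reach the given peer.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("192.0.2.1", 80))  # 192.0.2.1 is a reserved TEST-NET-1 address
    print("Address currently leased to this client:", s.getsockname()[0])
```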

URL: https://www.sciencedirect.com/science/article/pii/B9781856177474000081

DVS Archiving and Storage

Anthony C. Caputo, in Digital Video Surveillance and Security (Second Edition), 2014

Client/Server Architecture

Larger organizations already recognize the benefits of using a client/server system within an enterprise-size environment. However, I believe it’s especially important (and I’ll get into some examples) for the smaller to mid-sized companies. Microsoft’s NOS is an affordable server operating system (it is offered at multiple pricing tiers) that comes with a suite of solutions. This server can offer smaller companies the same cost efficiencies and improved processes as the goliaths of industry.

The Windows NOS includes extended built-in functionalities, such as the ability to “plug and play” more computers onto your network without having to touch it. This is accomplished through the Dynamic Host Configuration Protocol (DHCP), so when Johnny visits from a satellite office out of town, you don’t need to figure out how to get his documents printed, emailed, or transferred; just plug him into the network, and the NOS will automatically assign him a temporary IP address to access network resources (see Chapter 3 for more information on DHCP).

The Windows NOS also includes built-in administrative access from a remote location using Terminal Services and Remote Desktop, which can also be used to view video surveillance footage remotely (more on this later).

URL: https://www.sciencedirect.com/science/article/pii/B9780124200425000095

Securing Web Applications, Services, and Servers

Gerald Beuchelt, in Computer and Information Security Handbook (Third Edition), 2017

Advanced HTTP Security

The basic security functions of HTTP described earlier are sufficient for simple client–server systems, but are hard to manage for complex multiparty interactions. In addition, the most common security transport—TLS—typically requires a comprehensive PKI rollout, especially when using user certificates for mutual authentication.
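For simple client–server interactions, HTTP Basic Authentication carried over TLS is often all that is needed. The following Python sketch shows the mechanics; the endpoint and credentials are hypothetical, and because Basic credentials are only base64-encoded, the request must travel over https to remain confidential.

```python
import base64
import urllib.request

# Hypothetical endpoint and credentials, used purely for illustration.
url = "https://api.example.com/resource"

# Basic auth only obfuscates the credentials with base64, so TLS (https)
# is what actually keeps them confidential on the wire.
credentials = base64.b64encode(b"alice:secret").decode("ascii")
request = urllib.request.Request(url, headers={"Authorization": f"Basic {credentials}"})

with urllib.request.urlopen(request) as response:
    print(response.status, response.read()[:100])
```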

Typical HTTP applications and services in social networking or cloud environments have use cases that cannot easily be addressed with basic HTTP authentication schemes. Furthermore, deploying PKI in such environments is often too expensive or too complex: PKI implies a fairly high level of trust in the binding of the credential to a system or user, which is hard to control in highly dynamic environments.

Based on these constraints, a number of large Web 2.0 providers (including Google, Twitter, AOL, and others), as well as smaller companies with deep insight into the architectures of dynamic web applications and REST-style HTTP services, began developing technologies in 2004 that are complementary to the “heavyweight” SOAP-centric identity management technologies. While initially focused on simple data-sharing use cases with limited risk (such as SSO for blog commenting), these technologies have matured to the point where they can be used to secure commercial services and to provide a simplified experience for users of social media and other web applications.
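The best-known results of this line of work are token-based schemes such as OAuth 2.0 (named here for context; the passage above does not single it out). The sketch below shows the general shape of bearer-token access against a hypothetical API: the client first obtains a short-lived token from an authorization server and then presents it on each request, so no long-lived password or client certificate is ever sent to the resource server.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoints and credentials, for illustration only.
TOKEN_URL = "https://auth.example.com/oauth2/token"
API_URL = "https://api.example.com/v1/profile"

# Step 1: exchange client credentials for a short-lived bearer token.
form = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-app",
    "client_secret": "s3cr3t",
}).encode()
with urllib.request.urlopen(urllib.request.Request(TOKEN_URL, data=form)) as resp:
    token = json.load(resp)["access_token"]

# Step 2: present the bearer token instead of a password or certificate.
request = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {token}"})
with urllib.request.urlopen(request) as resp:
    print(resp.status, resp.read()[:200])
```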

URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000107

Application Recovery

Gerhard Weikum, Gottfried Vossen, in Transactional Information Systems, 2002

17.5.10 Applicability to Multi-tier Architectures

Our detailed presentation of the server reply logging method in the preceding subsections has been based on a two-tier client-server system architecture. However, the algorithm is applicable in three-tier, and even more general multitier, federated systems given that certain prerequisites are satisfied. The key property on which we have built is that a client runs a single application process and that this application is piecewise deterministic. So the application should be single threaded, should use only synchronous messages where a process is suspended after sending a message and waits until the reply message is received, and should not depend on real-time events such as timers and other external signals.
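The following minimal Python sketch (class and method names invented here) captures the server side of this idea: each reply is forced to a log, keyed by client and request number, before it is sent, so that a recovering, piecewise-deterministic client that resends an old request receives exactly the reply it saw in the original run instead of triggering re-execution.

```python
import json

class ReplyLoggingServer:
    """Illustrative sketch of server reply logging, not the book's full algorithm."""

    def __init__(self, log_path="reply.log"):
        self.log_path = log_path
        self.reply_log = {}          # (client_id, seq) -> reply already sent

    def handle(self, client_id, seq, request):
        key = (client_id, seq)
        if key in self.reply_log:    # duplicate request from a client replaying after a crash
            return self.reply_log[key]
        reply = self.execute(request)      # run the actual transaction
        self.reply_log[key] = reply
        self._force_log(key, reply)        # reply must be stable before it is sent
        return reply

    def execute(self, request):
        return {"status": "ok", "echo": request}   # placeholder transaction logic

    def _force_log(self, key, reply):
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"key": list(key), "reply": reply}) + "\n")
```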

Although the above property appears to be fairly restrictive at first glance, many Internet applications, including three-tier applications, easily meet these conditions. The trick is that we can view a more complex, multi-threaded application as if it were a set of virtual clients, each running its own single-threaded application. This approach works fine as long as there is no application-relevant shared state among threads. So the only deeper problem would be asynchronous events such as interrupting an application upon an external signal, but these mechanisms are used very infrequently in mainstream business applications.

Once we have conceptually decomposed a complex application, we realize that it also does not matter whether the application runs on a dedicated client machine or on a middle-tier application server (e.g., a Web server). The application server may even run multiple applications, but they can be handled as if they were multiple clients, provided there is no shared state of application-noticeable relevance among them (i.e., discounting low-level state data such as request queues, buffers, etc.). The application server would run the server part of the general algorithm developed in this section with regard to each of its clients, and it would run the client part with regard to the data servers that it interacts with. This way the algorithm can be cascaded along a hierarchy of servers, so that even multi-tier federated applications fall into the scope of applicability. This does not mean, however, that the algorithm is ready for all possible e-Service applications on the Internet. There are, of course, complex applications such as decentralized auctions or collaborative authoring that require even more far-reaching forms of application recovery. Such applications call for future research on recovery guarantees for arbitrary multi-tier systems.

URL: https://www.sciencedirect.com/science/article/pii/B9781558605084500181

Other Systems

Craig Wright, in The IT Regulatory and Standards Compliance Handbook, 2008

Publisher Summary

This chapter reviews a number of other audit systems and compliance issues. Auditing mainframe and other legacy systems is far simpler than auditing modern client/server systems: these systems have been around far longer, and extensive programs exist to manage them. It is common for many IT audits to exclude the most critical systems; through a combination of misunderstanding and aversion to older technologies, legacy systems and mainframes are frequently bypassed. AuditNet is one of the best repositories of audit and compliance programs. It provides both free and subscriber-based access to a large number of audit programs for many systems and compliance structures. Mainframes are considered legacy systems. The resilience of these systems, coupled with their high processing capacity and throughput, means that they have their proponents and are unlikely to disappear anytime soon. They are particularly widespread in environments that use complex, large-scale databases requiring high-volume processing all day, every day. Many auditors avoid mainframes; the combination of specialist skills required and a perception of old technology leaves these systems at risk.

URL: https://www.sciencedirect.com/science/article/pii/B9781597492669000199

Year 2000 (Y2K) Bug Problems

Steven H. Goldberg, in Encyclopedia of Information Systems, 2003

I.A.1. Internal IT Systems

Internal IT systems were the most obvious source of potential Y2K failures and the risk most under the company's immediate control. Vulnerable technologies included mainframes, client/server systems, networks, intranets, personal computers (PCs), and other components of hardware and software infrastructure, as well as application programs, utilities, databases, etc. The nature of Y2K risk often depended on the extent to which components of information systems were under the company's direct control. Many firms had a combination of systems, including applications developed entirely in-house, plain-vanilla or modified commercial off-the-shelf (COTS) programs, and custom-built systems and applications. Many companies used software developed by third parties to provide expert application and systems development not available in-house. However, vendors often were not willing to provide the level of Y2K support required, because they were faced with risks to their own bottom line and possible liability exposures from their noncompliant products.

Companies often depended on maintenance and support agreements with multiple vendors responsible for different parts of the system that had been strung together over time. Dealing with multiple vendors of IT products and services was difficult enough in normal times, and it was fair to assume that Y2K would stress these relationships in new and unpredictable ways. The patchwork quilt of interdependent technologies upon which most companies relied created additional difficulties because Year 2000 required assessment, remediation, and testing of an entire system for which no single vendor bore sole responsibility.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404002008

Kernel specifications

Barry Dwyer, in Systems Analysis and Synthesis, 2016

5.3 Infrastructure

What do we mean by infrastructure? Infrastructure is the technology within which rules defined by the kernel are embedded. A common infrastructure is a client-server system operating across the Internet. Typically, the server has access to a database and remote clients can view the database using a browser. The server transmits database information to the client, whose browser renders the information according to HTML (Hypertext Mark-up Language) commands. A user of such a system interacts with the kernel by filling in HTML forms, which the client process transmits to the server. The server interprets the user’s data and queries or updates the database as appropriate.
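A minimal sketch of this arrangement, using only Python's standard library and an invented in-memory "database", is shown below: the browser renders the HTML form served on GET, and the POST handler interprets the submitted fields and updates the stored data, just as described above. A real server would, of course, sit in front of a proper DBMS.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

# Illustrative in-memory "database"; a real kernel would query a DBMS.
DATABASE = {"alice": "Accounts", "bob": "Dispatch"}

FORM = """<html><body>
<form method="POST" action="/">
  Name: <input name="name"> Department: <input name="dept">
  <input type="submit" value="Update">
</form>
<pre>{rows}</pre>
</body></html>"""

class KernelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser renders this HTML form for the user to fill in.
        rows = "\n".join(f"{k}: {v}" for k, v in DATABASE.items())
        body = FORM.format(rows=rows).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # The server interprets the user's form data and updates the database.
        length = int(self.headers["Content-Length"])
        fields = parse_qs(self.rfile.read(length).decode())
        DATABASE[fields["name"][0]] = fields["dept"][0]
        self.send_response(303)
        self.send_header("Location", "/")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), KernelHandler).serve_forever()
```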

In an apparently similar situation, using a local-area network, the server merely allows access to a shared database, and the client computers contain the kernel logic. This offers the user closer interaction with the kernel logic.

In the early days of computing, a user might have interacted with the kernel by feeding a batch of punched cards into a reader, waiting for the job to be processed, and reading printed reports — often flagging errors in the input data. With this technology, user interaction was poor and very slow indeed.

Many systems are a mixture of technologies, with an interactive front end and a batch processing back end. For example, an automatic teller machine may record transactions in a database interactively as they occur, but a batch system may produce customers’ monthly statements. In a retail context, customer orders may be recorded interactively, and at the end of the day, a batch system may prepare a report that enables store-hands to take goods in the most efficient manner and pack them for despatch next day. In preparing this book, I am using an interactive editor to create a text containing LaTeX commands, which is ultimately rendered into a printable format by a batch process.

An aircraft’s infrastructure may contain several microprocessors, each dedicated to a specialised task. One might be dedicated to navigation, another to moving a control surface, a third to stabilising the aircraft’s flight, and so on. The pilot interacts with the system by, among other things, moving a joystick. A car might contain an engine-management computer that controls the fuel-air mixture, ignition timing, and valve timing in response to engine speed, etc. The driver interacts with the system by controlling the throttle, the gear shift, and the brakes.

The reason why so many different infrastructures exist is that they aim to optimise different objectives: the efficient use of limited bandwidth, the efficient use of a shared resource, the efficient use of storage technology, the efficient use of human resources, the need for reliability, or the need to minimise product cost. In short, to optimise something at the expense of simplicity. The cost of optimisation is the additional software and hardware needed to allow different parts of a system to communicate. For example, in a batch processing system, transaction files are needed to transfer data between processing steps. These files — which can be rather complex — have to be defined, their records of various kinds have to be written and read, and the records have to be deciphered in order to call appropriate procedures. Similar communication software has to be written for almost every kind of infrastructure. Roughly speaking, about 90% of programming effort is devoted to the infrastructure and only 10% to the kernel logic.

This book doesn’t have space to deal with every kind of infrastructure in depth. Indeed, it isn’t the aim of this book to deal with infrastructure at all. But its examples and case study have to be set in some context, so it mainly focusses on a database server with multiple clients. Such an infrastructure shares most of the problems that many other infrastructures have. Even so, it is largely left to the reader to see the underlying unity of systems synthesis and adapt what is said to the reader’s own speciality.

Ideally therefore, the specification of a system should be independent of the technology that it currently uses or of any technology that it might use in the future. This will free the systems analyst from being constrained by what technology currently exists, while allowing maximum freedom in implementation. System synthesis is therefore the act of combining the specification with a suitable technological infrastructure. Synthesis is 10% inspiration and 90% perspiration.

A specification will consist of two parts: a description of its state, or database, and a description of the events that can cause its state to change or can cause it to produce output. Ideally, the kernel specification should avoid all mention of infrastructure. The only time when the specifier must consider infrastructure is when designing the user interface.
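To make the distinction concrete, here is a tiny, infrastructure-free sketch in Python (the account example is invented): the state is a plain data structure standing in for the database, and each event is a function that changes that state, with no mention of browsers, networks, or batch files.

```python
from dataclasses import dataclass, field
from typing import Dict

# Invented kernel specification: state plus events, no infrastructure.

@dataclass
class Account:
    balance: int = 0

@dataclass
class KernelState:
    accounts: Dict[str, Account] = field(default_factory=dict)

def open_account(state: KernelState, name: str) -> None:
    """Event: a new account comes into existence."""
    state.accounts[name] = Account()

def deposit(state: KernelState, name: str, amount: int) -> None:
    """Event: the named account's balance changes."""
    state.accounts[name].balance += amount

if __name__ == "__main__":
    state = KernelState()
    open_account(state, "alice")
    deposit(state, "alice", 100)
    print(state)
```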

URL: https://www.sciencedirect.com/science/article/pii/B978012805304100014X

What provides access to resources and services in a client server network?

In client/server networks, users store their files on a central computer, from which the files are accessed directly. The server is the central computer that stores the information, and the client is the computer (and user) that accesses the information from that central computer.

What is meant by client/server computing?

The splitting of an application into tasks performed on separate computers connected over a network. In most cases, the “client” is a desktop computing device (e.g., a PC) or a program “served” by another networked computing device (i.e., the “server”).

What are 3 characteristics of a client server network?

The three-tier client-server architecture consists of a presentation tier known as the User Interface layer, an application tier called the Service layer, and a data tier comprising the database server.

Which server would a client server network include to allow users to share software?

A file server. On a client/server network, a file server stores and shares software and files for users (websites, by contrast, are hosted on a web server).