What would be the two most suitable devices to transfer a file from home to school?

Understanding the Technology

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

USB Flash Drives

USB flash drives are small, portable storage devices that use a USB interface to connect to a computer. Like flash memory cards, they are removable and rewritable, and have become a common method of storing data. However, whereas flash memory cards require a reader to be installed, USB flash drives can be inserted into the USB ports found on most modern computers. The storage capacity of these drives ranges from 32MB to 64GB.

USB flash drives are constructed of a circuit board inside a plastic or metal casing, with a USB male connector protruding from one end. The connector is then covered with a cap that slips over it, allowing the device to be carried in a pocket or on a key fob without worry of damage. When you need it, you can insert the USB flash drive into the USB port on a computer, or into a USB hub that allows multiple devices to be connected to one machine.

USB flash drives often provide a switch that sets write protection on the device. When it is enabled, the data on the device cannot be modified, which allows the drive to be analyzed safely. This is similar to the write protection once used on floppy disks, which made it impossible to modify or delete existing data, or to add additional files to the device.

Although USB flash drives offer limited options in terms of their hardware, many come with software that provides additional features. Encryption may be used, preventing anyone from accessing data on the device without first entering a password. Compression may also be used, allowing more data to be stored on the device. In addition, a number of programs are specifically designed to run from a USB flash drive rather than a hard disk. For example, an Internet browser may run from the drive and store its history and temporary files there, making it more difficult to identify a person's browsing habits.

USB flash drives have been known by many other names over the years, including thumb drive and USB pen drive. Because these devices are so small, they can be packaged in almost any shape or item. Some USB flash drives are hidden in pens, making them unidentifiable as a flash drive, unless one pulled each end of the pen to pop it open and saw the USB connector. The pen itself is completely usable, but contains a small flash drive that is fully functional. This allows the device to be hidden, and can be useful for carrying the drive around with you…unless you tend to forget your pen in places, or work in a place where others walk off with your pens.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597492768000042

Cryptography

Jason Andress, in The Basics of Information Security (Second Edition), 2014

Data security

A great many solutions exist for protecting data at rest. The primary method we use to protect this type of data is encryption, particularly when we know that the storage media, or the media and the device in which it is contained, will be potentially exposed to physical theft, such as on a backup tape or in a laptop.

An enormous number of commercial products are available that will provide encryption for portable devices, often focused on hard drives and portable storage devices, including products from large companies such as McAfee (presently owned by Intel), Symantec, and PGP (presently owned by Symantec), just to name a few. The features of such commercial products often include the ability to encrypt entire hard disks, known as full disk encryption, and a variety of removable media, as well as centralized management and other security and administrative features. There are also a number of free and/or open source encryption products on the market, such as TrueCrypt, BitLocker (which ships with some versions of Windows), dm-crypt (which is specific to Linux), and many others.
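The same protection can also be applied by hand with standard tooling. As a minimal sketch (assuming the OpenSSL command-line tool is installed; the filenames and passphrase are illustrative placeholders, not from the text), a file can be encrypted before it ever reaches removable media:

```shell
# Hedged sketch: encrypt a file with AES-256 before moving it to portable media.
# "secret.txt" and the passphrase are placeholders for illustration only.
printf 'sensitive data' > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:example-passphrase -in secret.txt -out secret.enc

# Recovering the plaintext requires the same passphrase.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:example-passphrase -in secret.enc -out recovered.txt
```

Only secret.enc would travel on the flash drive or backup tape; without the passphrase, a thief who obtains the media sees only ciphertext.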

We also need to be aware of the location where data of a sensitive nature for which we are responsible is being stored and need to take appropriate measures to ensure that it is protected there.


URL: https://www.sciencedirect.com/science/article/pii/B9780128007440000051

Domain 7

Eric Conrad, ... Joshua Feldman, in Eleventh Hour CISSP® (Third Edition), 2017

Forensic Media Analysis

In addition to the valuable data gathered during the live forensic capture, the main source of forensic data typically comes from binary images of secondary storage and portable storage devices such as hard disk drives, USB flash drives, CDs, DVDs, and possibly associated cellular (mobile) phones and mp3 players.

Fast Facts

Here are the four basic types of disk-based forensic data:

Allocated space: portions of a disk partition that are marked as actively containing data.

Unallocated space: portions of a disk partition that do not contain active data. This includes portions that have never been allocated, as well as previously allocated portions that have been marked unallocated. If a file is deleted, the portions of the disk that held the deleted file are marked as unallocated and made available for use.

Slack space: data is stored in specific-sized chunks known as clusters, which are sometimes referred to as sectors or blocks. A cluster is the minimum size that can be allocated by a file system. If a particular file, or final portion of a file, does not require the use of the entire cluster, then some extra space will exist within the cluster. This leftover space is known as slack space; it may contain old data, or it can be used intentionally by attackers to hide information.

“Bad” blocks/clusters/sectors: hard disks routinely end up with sectors that cannot be read due to some physical defect. The sectors marked as bad will be ignored by the operating system since no data could be read in those defective portions. Attackers could intentionally mark sectors or clusters as being bad in order to hide data within this portion of the disk.
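The slack-space arithmetic described above can be sketched in a few lines (a toy illustration; the 4096-byte cluster size is a common default, not a universal value):

```python
import math

def slack_bytes(file_size: int, cluster_size: int = 4096) -> int:
    """Bytes left over in the final cluster allocated to a file."""
    if file_size == 0:
        return 0
    # A cluster is the minimum allocation unit, so round up.
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

# A 5000-byte file occupies two 4096-byte clusters: 8192 - 5000 = 3192 slack bytes.
print(slack_bytes(5000))   # 3192
print(slack_bytes(8192))   # 0 (the file exactly fills its clusters)
```

Those leftover bytes are exactly where remnants of older files, or deliberately hidden data, can survive.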


URL: https://www.sciencedirect.com/science/article/pii/B9780128112489000073

Domain 9

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Second Edition), 2012

Forensic Media Analysis

In addition to the valuable data gathered during the live forensic capture, the main source of forensic data typically comes from binary images of secondary storage and portable storage devices such as hard disk drives, USB flash drives, CDs, DVDs, and possibly associated cellular phones and mp3 players. A binary, or bitstream, image is used because an exact replica of the original data is needed. Normal backup software will capture only the active partitions of a disk and, further, only the data marked as allocated. Normal backups could well miss significant data, such as data intentionally deleted by an attacker, so binary images are used instead. To fully appreciate the difference between a binary image and a normal backup, the investigator needs to understand the four types of data that exist.

Allocated space—Portions of a disk partition that are marked as actively containing data.

Unallocated space—Portions of a disk partition that do not contain active data. This includes portions that have never been allocated, as well as previously allocated portions that have been marked unallocated. If a file is deleted, the portions of the disk that held the deleted file are marked as unallocated and made available for use.

Slack space—Data is stored in specific size chunks known as clusters. A cluster is the minimum size that can be allocated by a file system. If a particular file, or final portion of a file, does not require the use of the entire cluster, then some extra space will exist within the cluster. This leftover space is known as slack space; it may contain old data or can be used intentionally by attackers to hide information.

“Bad” blocks/clusters/sectors—Hard disks routinely end up with sectors that cannot be read due to some physical defect. The sectors marked as bad will be ignored by the operating system since no data could be read in those defective portions. Attackers could intentionally mark sectors or clusters as being bad in order to hide data within this portion of the disk.

Given the disk-level tricks that an attacker could use to hide forensically interesting information, a binary backup tool is used rather than a more traditional backup tool that would only be concerned with allocated space. There are numerous tools that can be used to create this binary backup, including free tools such as dd and windd, as well as commercial tools such as Ghost (when run with specific non-default switches enabled), AccessData® FTK, or Guidance® Software EnCase®.
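A minimal dd-based acquisition can be sketched as follows. Because imaging a real device node such as /dev/sdb requires hardware, this sketch uses an ordinary file as a stand-in "device"; the conv=noerror,sync flags keep dd reading past unreadable sectors rather than aborting:

```shell
# Stand-in "device": a 51200-byte file with data written mid-way,
# mimicking content that sits outside any allocated file.
dd if=/dev/zero of=disk.img bs=512 count=100 2>/dev/null
printf 'hidden evidence' | dd of=disk.img bs=512 seek=50 conv=notrunc 2>/dev/null

# Bit-for-bit copy; on a real acquisition, if= would name the device node.
dd if=disk.img of=evidence.img bs=512 conv=noerror,sync 2>/dev/null

# The hashes must match for the image to be forensically sound.
sha256sum disk.img evidence.img
```

Unlike a file-level backup, the resulting image carries unallocated space, slack space, and "bad"-marked regions along with the allocated data, which is precisely why binary imaging is preferred.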

Learn by Example: Live Forensics

Although forensics investigators traditionally removed power from a system, the typical approach now is to gather volatile data. Acquiring volatile data is called live forensics, as opposed to the post mortem forensics associated with acquiring a binary disk image from a powered-down system. One attack tool stands out as having brought the need for live forensics into full relief.

Metasploit is an extremely popular free and open source exploitation framework. A strong core group of developers led by HD Moore have consistently kept it on the cutting edge of attack techniques. One of the most significant achievements of the Metasploit framework is the modularization of the underlying components of an attack. This modularization allows exploit developers to focus on their core competency without having to expend energy on distribution or even developing a delivery, targeting, and payload mechanism for their exploit; Metasploit provides reusable components to limit extra work.

A payload is what Metasploit does after successfully exploiting a target; Meterpreter is one of the most powerful Metasploit payloads. As an example of some of the capabilities provided by Meterpreter, Figure 10.4 shows the password hashes of a compromised computer being dumped to the attacker's machine. These password hashes can then be fed into a password cracker that would eventually recover the associated passwords. Or the password hashes might be usable directly in Metasploit's PSExec exploit module, which is an implementation of functionality provided by the Sysinternals (now Microsoft) PsExec, but bolstered to support pass-the-hash functionality. Information on Microsoft's PsExec can be found at http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx. Further details on pass-the-hash techniques can be found at http://oss.coresecurity.com/projects/pshtoolkit.htm.

In addition to dumping password hashes, Meterpreter provides such features as:

Command execution on the remote system

Uploading or downloading of files

Screen capture

Keystroke logging

Disabling the firewall

Disabling antivirus

Registry viewing and modification (as seen in Figure 10.5)

And much more, as Meterpreter's capabilities are updated regularly

In addition to the above features, Meterpreter was designed with detection evasion in mind. Meterpreter can provide almost all of the functionalities listed above without creating a new file on the victim system. Meterpreter runs entirely within the context of the exploited victim process, and all information is stored in physical memory rather than on the hard disk.

Imagine an attacker has performed all of the actions detailed above, and the forensic investigator removed the power supply from the compromised machine, destroying volatile memory. There would be little to no information for the investigator to analyze. The possibility of Metasploit's Meterpreter payload being used in a compromise makes volatile data acquisition a necessity in the current age of exploitation.
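The order-of-volatility idea behind live forensics can be sketched with ordinary system tools (a simplified Linux-host illustration; real live-response kits use trusted, statically linked binaries and also capture RAM, network connections, and logged-in users):

```shell
# Record when and on what host the collection happened, before anything else.
date > acquisition_time.txt
uname -a > host_info.txt

# Snapshot the volatile process list from /proc; a hard power-off destroys this.
# (A production kit would use trusted ps/netstat binaries and a RAM imager.)
ls /proc | grep -E '^[0-9]+$' | sort -n > processes.txt

wc -l processes.txt   # sanity check: at least PID 1 should appear
```

Collecting these artifacts before pulling power is what preserves evidence of memory-resident payloads such as Meterpreter.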


URL: https://www.sciencedirect.com/science/article/pii/B9781597499613000108

Pod Slurping

Brian Anderson, Barbara Anderson, in Seven Deadliest USB Attacks, 2010

Publisher Summary

This chapter investigates the “pod slurping” fiasco. It attempts to uncover the techniques involved in creating a slurping device, exploring recent advancements, and probing into the preventative aspects one should consider. The iPod is just one example of a convenient contraption that can be cleverly crafted into a portable snatcher. Any portable storage device can be used to slurp data from an unsuspecting host. iPhones, BlackBerrys, PDAs, cameras, flash drives, and mobile phones are just a few devices that can be altered to elicit desired information. The theater of this technological war is already saturated with quality tools for crackers, hackers, phrackers, and phreakers. These devices simply provide another stealthy deployment mechanism for an existing arsenal of weapons with portable properties. From a corporate standpoint, banning these devices entirely could undermine the very outcome the ban is intended to produce. Mobile phones and other memory-based gadgets are entrenched as an essential part of the enterprise and our daily lives. A sudden policy change enforcing their banishment could decrease morale, spike interest, or even lead to disgruntled behaviors. The lines between what is beneficial and what is detrimental are more entangled than ever. It is becoming increasingly difficult to determine which of the latest improvised illusions actually pose a true hazard. Adaptations of these attacks are evolving with increasing velocity, and the best thing we can do is constantly strive for enhanced awareness.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495530000068

Computers and Effective Security Management1

Charles A. Sennewald, in Effective Security Management (Fifth Edition), 2011

PC Hardware

PCs are manufactured in both desktop and smaller portable models. A portable computer may be a laptop; a slightly smaller notebook; or an even smaller handheld or pocket-size device, including netbooks, tablet computers with touch screens, and personal digital assistants (PDAs). Computers are also found in cellular or mobile telephones.

PCs basically consist of the central processing unit (CPU), the memory, and peripherals such as printers and other devices.

Central Processing Unit

The CPU is a microprocessor that accepts digital data and carries out instructions. It manipulates these data and stores them in memory before outputting the results. Under the control of the operating system (e.g., DOS, Windows, Mac OS), the CPU manages the other hardware components of a PC.

Memory

The CPU uses instructions permanently held in read-only memory (ROM) chips and carries out a particular function by accessing information temporarily held in random-access memory (RAM).

Data are permanently stored in two locations:

1. Hard drive: This is usually located within the computer. Some of the more compact computers use flash drives that have no moving parts. Flash memory packs a lot of storage into a small package.

2. Portable devices: Floppy drives are rarely seen in modern computers; they were replaced by CD and DVD drives. With the addition of the Universal Serial Bus (USB) port and the advances made in flash memory, the USB flash drive makes data storage and transfer extremely convenient. The smallest notebooks and tablets have no CD or DVD drive; instead they rely on USB drives. Figure 21–1 shows various portable storage devices.


Figure 21–1. Various Portable Data Storage Devices.

Peripherals

Peripherals are devices attached to or built into the PC. They include a keyboard; mouse; trackball; joystick; optical scanner; digital camera; video camera; microphone; and various tablets, pads, and screens that are touch- or light-sensitive. Other peripheral devices are for outputting information, which include a display screen, speaker, and printer. A data communication device is often included, and it has both input and output capabilities.

Networks

PCs can be connected together to form a local area network (LAN) or a wide area network (WAN) and may have connections to the Internet. Usually the networked computers are not centrally controlled, although some specialized utility software programs can be used to allow an authorized person to control another person’s computer via a network.

A LAN connects two or more PCs, often called workstations, to each other and sometimes to a minicomputer, a mainframe, or a common printer. The main computer that serves all other personal computers in the LAN is called the server.

The Internet is a worldwide connection of commercial and noncommercial computer networks tied together by telecommunication resources. The collection of protocols that allows computers to transmit data across the Internet is called Transmission Control Protocol/Internet Protocol (TCP/IP). IP requires every computer on the Internet to have an individual address.

When a stream of data is transmitted, TCP software breaks it into manageable “packets” of digital information and numbers each packet in order. The IP address of the destination computer is marked on each of these numbered packets, and they are dispatched. Packet switchers, or routers, are computers on the Internet that read the IP address on each packet and expeditiously direct it toward its destination. When the packets arrive at their final destination, TCP software checks that they are all present and then reassembles them into the original message. The word Internet came to mean both the network itself and the protocols that governed communication across the network.
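The packetize-number-reassemble idea can be illustrated with a toy sketch (this is not real TCP, just the sequencing-and-reassembly concept; the names and the 8-byte chunk size are invented for illustration):

```python
import random

def packetize(data: bytes, size: int = 8):
    """Split data into (sequence_number, chunk) pairs, like numbered TCP segments."""
    return [(i, data[i * size:(i + 1) * size])
            for i in range((len(data) + size - 1) // size)]

def reassemble(packets):
    """Sort by sequence number and rejoin, as the receiving TCP stack does."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"a file sent from home to school"
packets = packetize(message)
random.shuffle(packets)            # routers may deliver packets out of order
assert reassemble(packets) == message
```

The sequence numbers are what let the receiver restore order no matter which route each packet took.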

An intranet is a private Internet within a corporation or organization. Whether a corporate network uses a LAN or a WAN, it can be designed to function with Web documents that can be accessed only by the private network users.

Wireless networks are becoming increasingly common, and offer greater convenience as well as greater vulnerabilities. Wireless security measures need to be carefully selected and regularly updated, most likely by an information security specialist.


URL: https://www.sciencedirect.com/science/article/pii/B9780123820129000216

Portable Virtualization, Emulators, and Appliances

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

MokaFive

Like MojoPac, MokaFive began as a virtual environment that could be run from a portable device. It is a spin-off from Stanford University's Computer Science Department and was founded in 2005. Stanford's portable virtual machine research started in 2001 and was based on the thin-client computing model. The original group, led by Dr. Monica Lam, consisted of four PhDs from Stanford University and industry professionals. MokaFive's product, called LivePC, was introduced in September 2006. It came in two versions: one for Windows and one for bare-metal installations. LivePC depends on VMware Player, and it imported existing LivePCs or created new ones using a very basic .vmx generator. Later advances allowed LivePCs to launch without the user first starting the MokaFive Engine: by simply dragging and dropping LivePC shortcuts onto the desktop or Start menu, users could launch LivePCs directly, and LivePC links could automatically download the MokaFive Engine. Around the same time, password protection was added to the MokaFive Engine.

In September 2007, IronKey and MokaFive partnered to deploy virtual desktop solutions on an encrypted USB flash drive. In April 2008, MokaFive Engine became MokaFive Player. The virtual environment is still called a LivePC. LivePCs contain everything needed to run a virtual computer: an OS and a set of applications. LivePCs can be run from a USB flash drive, USB hard disk, iPod, or desktop computer. At that time, MokaFive also introduced a console to centralize the management of multiple LivePCs.

The idea behind MokaFive is to allow consumers to carry virtual PCs on a portable storage device, so that a user can walk up to any Windows PC, plug in a device such as an iPod, launch MokaFive, and run Damn Small Linux (DSL) in a window. MokaFive Player, which is actually a VMware Player, runs the LivePCs and can be downloaded, or distributed preloaded on a USB device, phone, or laptop. The LivePC can automatically be backed up while the USB drive is plugged into a computer. Premade LivePCs are downloadable from MokaFive's Web site. This is similar to the VMware and Parallels virtual appliance concept that will be discussed in the next section.

A LivePC can be downloaded from a public repository of LivePC images created by both MokaFive and outside entities. Figure 4.5 shows MokaFive's LivePC Library.


Figure 4.5. MokaFive's LivePC Library

The technology can stream and prefetch LivePCs, so they can be shared. LivePCs are always live. This means that users automatically receive updates to the LivePCs as the maintainers make changes. With MokaFive Player, users can run multiple LivePC images irrespective of the underlying OS. It also provides complete isolation from the underlying OS, so users can run corporate LivePC images securely on any personal or unmanaged device.

As the MokaFive Player starts on a computer, a configuration check is done to determine if the OS and hardware meet the requirements to run the player, as shown in Figure 4.6.


Figure 4.6. MokaFive's System Configuration Check

After MokaFive Player is finished loading, the user screen loads. At this point, the user can load an already downloaded LivePC, add a new LivePC, or change the settings, as shown in Figure 4.7.


Figure 4.7. MokaFive's Settings

It is in this area that the LivePC startup, network, and backup settings can be configured. For example, the user can choose whether a LivePC starts automatically or only after a password is entered, as shown in Figure 4.8.


Figure 4.8. MokaFive's Startup Settings

At this point, the LivePC will check for updates or download the default LivePC, which for the version used in this book (1.7.0.20942) is Linux XP Desktop. This is shown in Figure 4.9.


Figure 4.9. LivePC Updating

Once the LivePC is updated, it is ready to run. Figure 4.10 shows the Linux XP Desktop running.


Figure 4.10. Linux XP Desktop Running as the User Sees It

MokaFive has entered the enterprise virtual desktop arena, just as MojoPac has.

MokaFive Suite is a Desktop-as-a-Service (DaaS). Chapter 10, “Cloud Computing and the Forensic Challenges,” discusses DaaS in further detail. It is basically a platform with desktop management capabilities, so Information Technology (IT) administrators can centrally create, deliver, secure, and update a fully contained virtual environment and then make it available to a large number of users. Users download their secure virtual desktop via a Web link and run it locally on any Mac or Windows computer. MokaFive Creator is the authoring tool that enables IT administrators to create the LivePC image; MokaFive Enterprise Server integrates with standard corporate infrastructure, and the Web-based MokaFive Console allows administrators to centrally manage, distribute, and control LivePC images.

According to MokaFive's Web site, MokaFive Service is a hosted desktop service designed for small- to medium-sized organizations with up to 500 users. This solution is a hosted service, where the organization receives a dedicated secure account within the MokaFive infrastructure. Then, IT administrators create and upload user information along with customized corporate LivePC images to MokaFive's server. The images are managed via MokaFive's Web-based console.

This environment has been implemented by companies such as Sheridan Institute of Technology and Advanced Learning, Panasonic, and healthcare facilities.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000047

The structural environment

Gillian Oliver, in Organisational Culture for Information Managers, 2011

Regional technological infrastructure

The capability and capacity of the organisation’s internal information technology systems will of course impact on the nature of information management services. But it is important to take a step back and consider the broader environmental context, because this dictates the opportunities and constraints that are the parameters for internal development. In addition, regional capabilities will influence the extent to which employees are restricted to working within organisational boundaries, their acceptance of organisational limitations and/or need to develop alternative solutions.

Today’s social networking, or Web 2.0, environment offers a huge range of possibilities for alternative ways of working. When it is no longer necessary to be in a designated physical space in order to work, a whole range of issues arises that may make life much more complicated for information managers. People are able to log into their work systems, such as e-mail, and conduct business from remote locations. Voice-over-Internet-protocol software such as Skype enables real-time global communication. Computing in the cloud, using cyberspace as a vast open-plan office and storage repository, opens up possibilities for collaborative working, alongside issues that are only now emerging, such as ownership of and responsibility for data.

In addition, portable storage devices enable employees to carry vast amounts of data off work premises, and also of course raise huge risks in terms of loss and unauthorised access to information.

Underlying people’s ability to take advantage of these features is the need for adequate equipment, power and access to information technology. There will of course be huge contrasts in the facilities available in western developed countries from those in third world and developing countries, on the other side of the digital divide. But some features of the Web 2.0 environment may enable digital working in ways not previously possible in countries without a robust technological infrastructure. Contrasts are also apparent among countries in the developed world, largely due to telecommunications policy and the associated infrastructure.

Free wireless internet makes a huge difference to the extent to which the flexibility of communication will be taken advantage of, and incorporated into working outside the bricks and mortar of traditional organisational boundaries.

Also, of course, it is important to bear in mind that if the regional technological infrastructure is one which facilitates the use of these new tools and features, organisational policy should reflect this reality. Otherwise the result may well be sabotage of internal information management systems. Whether this sabotage is intentional or accidental, the result will be the same as people develop their own workarounds.

I encountered an instance of the complexity of this new environment when consulting for a public sector organisation based on one of the island nations of the South Pacific. Internet bandwidth was limited, and an attempt to develop an online document repository that could be accessed remotely failed because download times were excessive. The organisation focused on developing approaches to restrict internet usage where possible, in order to make better use of the bandwidth that was available.

One of the results of this strategy was to ban the use of Skype, as this was viewed as software which presented an excessive drain on bandwidth. Furthermore, Skype was perceived as a tool that was used for social, rather than work purposes. However, employees had developed their own communication practices as an integral feature supporting and facilitating their work, and Skype played a central role.

The functions of this organisation meant that employees travelled a great deal, largely to Pacific island nations that were even more remote. One of the primary purposes of travel was to take part in meetings which involved high-level negotiation of policy spanning different jurisdictions. Skype was used wherever possible as a quick and easy way of checking back with colleagues and managers, especially to ensure that any decisions or positions taken out of the office were in keeping with home organisational policy. When Skype could no longer be used, employees developed their own workaround. This, in effect, consisted of making sure that the organisation’s information could be accessed externally. Staff established their own personal e-mail accounts in the cloud, for instance setting up hotmail or gmail accounts. Then before travelling, they would e-mail themselves copious quantities of documents to make sure they could have them to hand if needed. This was in addition to using portable memory devices such as USB sticks, as people had found from experience that these small devices could easily be lost or left in someone else’s computer. All in all, it was no understatement to say that the organisation’s most important information resources were exposed to significant risks, largely due to insufficient attention being paid to understanding how information technology could be used to support work practices.

The regional technological infrastructure plays such a key role in shaping and influencing information management practices. It must be taken into account when developing policies, and that involves keeping up to date with the latest social networking tools and finding out how they are used within organisations. It is in many ways much easier to take a draconian approach to limiting the use of such tools, or banning the use of portable memory sticks, but the end result of such actions is highly unlikely to be beneficial to organisational information management.


URL: https://www.sciencedirect.com/science/article/pii/B9781843346500500035

Generating Crisis Maps for Large-scale Disasters: Issues and Challenges*

Partha Sarathi Paul, ... Sajal Das, in Wireless Public Safety Networks 3, 2017

4.3 Proposed solution in a nutshell

In [SAH 15], we proposed a generalized architecture that enables a post-disaster communication network covering a large area. As a case study, we selected the Sundarbans, West Bengal, India for a comparative analysis. In simulation-based comparisons with various low-cost wireless technologies and devices, we found that our proposed architecture would perform better than the alternatives. In [PAU 15], we presented a naive implementation of the architecture as a lab-scale testbed, whose performance establishes the proposed architecture as a sound communication strategy for challenged situations.

4.3.1 Multi-tier hybrid architecture for post-disaster communication

Building an effective communication system with limited resources demands careful planning of how network resources are used. To motivate a latency-aware hybrid network architecture for a challenged network, a comparative analysis was made (Figure 4.4), based on simulation results using the parameters shown in Figure 4.5, across the four restoration steps shown in Figure 4.6, together with their drawbacks.


Figure 4.4. Utilization of various technologies for four different scenarios


Figure 4.5. Simulation parameters for four different scenarios


Figure 4.6. Simulation under four different scenarios. For a color version of the figure, see www.iste.co.uk/camara/wireless3.zip

In the first step, no measure has been taken after the disaster, as shown in Figure 4.6(a). The entire disaster-hit area, called the Affected Area (AA), is divided into small zones in which people carrying PDA devices form a DTN, with victims gathering at public places denoted as Shelter Points (SPs) in our design (these may be bus stops, gram panchayat offices, school buildings, etc.). The DTN-enabled PDA devices move randomly around each shelter point, and some vehicles acting as Data Mules (DMs) move among the SPs along the pathways in a random manner.

In the second step, a dropbox (DB) is placed at each SP to collect, from the DTN nodes, packets destined for a fixed control center, the Master Control Station (MCS), and to transmit them to the data mules (DMs) (Figure 4.6(b)). Here the DTN-enabled smartphones come into range of the DB at certain intervals, and the rescue vehicles (DMs) again move among the SPs along the pathways at random. In both cases we obtained low delivery probability, as shown in Figure 4.7.
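The delivery probability in these opportunistic scenarios depends on chance contacts between the mule, the shelter points and the MCS. A minimal Monte Carlo sketch of the random-mule case (not the ONE simulator used in the chapter; all parameter names and values are our own illustrative assumptions) makes the dynamics concrete:

```python
import random

def simulate_delivery(n_sps=5, n_steps=200, visit_prob=0.1, trials=1000):
    """Rough Monte Carlo estimate of delivery probability when a single
    data mule visits shelter points (SPs) at random, as in the first two
    restoration steps. A message waits at a random SP; it is delivered
    once the mule has (1) visited that SP and (2) later reached the MCS."""
    delivered = 0
    for _ in range(trials):
        src = random.randrange(n_sps)          # SP holding the message
        carrying = False
        for _ in range(n_steps):
            if random.random() < visit_prob:   # mule makes a stop
                stop = random.randrange(n_sps + 1)  # SPs 0..n-1, MCS = n
                if stop == src:
                    carrying = True            # pick up the message
                elif stop == n_sps and carrying:
                    delivered += 1             # drop off at the MCS
                    break
    return delivered / trials
```

Sweeping `visit_prob` downward reproduces the qualitative result above: with infrequent random contacts, delivery probability stays low.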


Figure 4.7. Cumulative delivery probability for four different scenarios. For a color version of the figure, see www.iste.co.uk/camara/wireless3.zip

In the third step, each SP has a dedicated DM with a predefined unique trajectory (Figure 4.6(c)) that moves from the DB to the MCS. Here too performance is poor, as observed in Figure 4.7. Allocating a dedicated data mule to each zone, and assuming good pathways for all those vehicles to reach the MCS over such long distances, is almost impractical after a disaster.

The infeasible assumptions and unsatisfactory performance of the third step compel us, in the fourth step, to set up long-range infrastructure: long-range communication units such as long-range WiFi (LWC), placed initially by brute force, creating a connected network that the DMs could not provide in practice. However, selecting the best sites for these LWCs while maintaining line-of-sight is challenging. Simulating this non-strategic placement gives unsatisfactory results, which motivates placing the LWCs strategically according to our proposed architecture, as shown in Figure 4.6(d).

The above simulation results motivated us to propose, in [SAH 15], a multi-tiered hybrid solution using low-cost portable storage and communication devices that can be adopted to implement post-disaster communication systems. The proposed architecture uses: (a) a pool of smartphones, termed DTN nodes, that function in opportunistic mode; (b) a pool of stationary devices, termed Information Drop Boxes (IDBs), capable of storing and communicating information opportunistically; (c) a pool of vehicles (cars, boats, copters or UAVs), termed Data Mules (DMs), fitted with specialized devices to store, carry and transfer information in an opportunistic mode; and (d) a pool of portable, easily deployable (WiFi/WiMax) devices for long-range communication, termed Long-range WiFi (LWC), that may be mounted at selected places. Figure 4.8 presents a schematic view of the hybrid opportunistic network architecture proposed in that paper. Through extensive simulation using the ONE simulator, we showed that the architecture can achieve a nearly 100% packet delivery rate within acceptable delay, provided a systematic approach is followed in deploying the system in the target area. We also proposed an empirical deployment strategy, though there remains scope for improving the plan.


Figure 4.8. A four-tier hybrid architecture for post-disaster communication. For a color version of the figure, see www.iste.co.uk/camara/wireless3.zip

4.3.2 Implementation & testbed

In [PAU 15], we presented a naive implementation of the above architecture as a lab-scale testbed for delay-tolerant network research using low-cost devices. A schematic view of the testbed, deployed in and around our institute, is shown in Figure 4.9.


Figure 4.9. Diagrammatic view of the PDM testbed installed in and around our institute. For a color version of the figure, see www.iste.co.uk/camara/wireless3.zip

In the testbed, we used Android smartphones as DTN nodes, laptops running Ubuntu Linux as IDBs, and custom-built devices as DMs, each comprising a Raspberry Pi6 running ARM Linux as the computing unit, a WiFi dongle as the communication interface, and a power bank as the power source. Figure 4.10 shows the components of the custom-built device. In addition, we used a pair of long-range WiFi devices mounted on guyed masts as LWC units. For seamless data transfer across such a wide variety of heterogeneous devices and platforms, we used a peer-to-peer file syncing tool, BitTorrent Sync.7 The results in that paper show reasonably good data transmission performance over the network, suggesting that this implementation of the strategy proposed in [SAH 15] holds promise for real-life application.


Figure 4.10. Device components used in our implementation of four-tier hybrid architecture for a post-disaster communication

4.3.3 Software suite

A major challenge in developing the proposed system is designing and implementing a software suite that functions seamlessly across the pool of devices mentioned above, which are heterogeneous in both kind and platform. Because we assume the conventional communication system may be disrupted during a large-scale disaster, the conventional TCP/IP protocol suite is not viable, so we chose the DTN bundle protocol as the underlying networking technology, following the bundle protocol specification for delay-tolerant networks. Although the bundle protocol gives a reliable way of communicating between two devices, several issues remain. The bundle protocol converts data into fixed-size blocks called "bundles"; the first question is how to convert a file into bundles, given that our files are of varying sizes and are transmitted from varying devices and platforms. Second, the bundle protocol specification says nothing about the underlying communication framework or about the file transfer protocol above it. In the present study, we present a novel protocol suite that caters to a wide class of devices and platforms communicating over a variety of wireless media, and that seamlessly provides essential post-disaster management services.
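The file-to-bundle question raised above can be sketched as fixed-size fragmentation with a small reassembly header. This is only an illustrative scheme: the header layout (file id, sequence number, fragment count) and the payload size are our own assumptions, not part of the bundle protocol specification.

```python
import struct

BUNDLE_PAYLOAD = 64 * 1024  # fixed payload size per bundle; an assumption

def to_bundles(file_id: int, data: bytes, size: int = BUNDLE_PAYLOAD):
    """Split a file of arbitrary length into fixed-size bundles, each
    prefixed with (file_id, seq, total) so that a receiver can reassemble
    fragments arriving out of order via different mules."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)] or [b""]
    total = len(chunks)
    return [struct.pack("!III", file_id, seq, total) + c
            for seq, c in enumerate(chunks)]

def from_bundles(bundles):
    """Reassemble the original file once all fragments are present."""
    parts = {}
    for b in bundles:
        _file_id, seq, total = struct.unpack("!III", b[:12])
        parts[seq] = b[12:]
    assert len(parts) == total, "missing fragments"
    return b"".join(parts[i] for i in range(total))
```

Because each fragment is self-describing, the scheme tolerates out-of-order and duplicated deliveries, which is exactly what opportunistic store-carry-forward transport produces.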

To develop the software suite, we use the protocol stack shown in Figure 4.11, implemented as part of the application layer of the conventional TCP/IP stack. The bottommost layer of the proposed stack deals with the communication interface through which the target devices connect, and here we tried multiple options. The most convenient way to establish network communication among heterogeneous devices and platforms would have been WiFi ad-hoc; however, due to its power-hungry nature, Google has deprecated it for Android devices and it will not be available as a native feature in the near future [TRI 11]. Devices can be rooted or jail-broken to install a WiFi ad-hoc interface on most standard smartphones using CyanogenMod,8 an open-source operating system for smartphones and tablets based on the Android mobile platform. The alternative provided for Android devices, WiFi Direct, is not yet stable and suffers frequent crashes and connection failures with heterogeneous devices.9 A third option is to use the native WiFi AP mode available on the devices in an intelligent manner so that they connect to one another; the detailed mechanism is described in a later section. Finally, any other available communication interface, such as GSM/3G, may also be used where available.


Figure 4.11. Architecture for the proposed software system

On top of the underlying connection interface, a P2P sync application synchronizes user data across the connected nodes. The selected P2P sync tool takes into account each user's or device's role (authority level) in the system and the priority level of the data files during synchronization. This role-based file synchronization lets users receive files according to their role in the system, in order of file priority.
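The role-and-priority ordering described above can be sketched as a filter plus a sort. The role names, clearance levels and file fields below are illustrative assumptions, not the chapter's actual scheme:

```python
# Hypothetical role hierarchy: higher clearance may receive more files.
ROLE_CLEARANCE = {"admin": 3, "responder": 2, "volunteer": 1}

def sync_order(files, role):
    """Return the files a device with the given role may receive,
    highest priority first; ties are broken by smaller size so that
    urgent, lightweight messages cross constrained links sooner."""
    clearance = ROLE_CLEARANCE[role]
    allowed = [f for f in files if f["min_role"] <= clearance]
    return sorted(allowed, key=lambda f: (-f["priority"], f["size"]))
```

On each contact, a node would offer its peer only the prefix of this ordering that fits within the contact window.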

4.3.4 Working principle of the proposed system

Our objective in the present chapter is to propose a smart end-to-end information management system, with associated strategies for information propagation and management, to be used for rescue planning and relief-need analysis. Such a system would ensure persistent communication that survives even a large-scale disaster, would connect the affected community to the disaster managers and to the outside world as a whole, and would also handle partial and out-of-sync information from the field. With such an information support system in place, first responders can synchronize their work with co-workers and plan evacuation routes for victims from ground zero10, while rescue-relief officials can efficiently plan relief operations to best serve the victims and reduce their suffering. Using the proposed system, the local and state administrations can collect crisis information from the stakeholders to pinpoint the rescue-relief needs of the affected community. In a nutshell, the proposed system provides an alternative communication infrastructure, with minimum possible additional investment, that survives a large-scale disaster and lets stakeholders exchange crisis-related information at their convenience. As a by-product, it gives stakeholders a consistent view of the crisis situation and of resource (human or material) demand and supply, in a convenient visual form respecting their level of authority, by automatically mining and aggregating the crisis information present in the system while preserving source anonymity. The proposed system comprises a smartphone application, Surakshit (presently designed for the Android platform only, and likely to be extended to other platforms in future), and a custom-built hardware suite, XOB x.1, made precisely for this purpose.

The following simple scenario demonstrates how the system works. Modern smartphones can exchange media files with one another when in proximity, the principle behind many popular file sharing applications (SHAREit, Xender, etc.). Having installed Surakshit on a smartphone, a user can collect and exchange information through a convenient user interface. The system automatically forwards messages to other users' devices (those with Surakshit installed) whenever one comes within communication range, and those devices in turn forward them onward until each message reaches its final destination. Users of non-destination devices, which receive and retransmit messages, are neither interrupted nor able to intercept the messages they relay. To strengthen and speed up this store-carry-forward propagation among smartphones, XOB x.1 devices may be preinstalled at strategic locations as Information Drop Boxes (IDBs), or mounted on vehicles (cars, boats, copters or UAVs), which are then called Data Mules (DMs). The resulting system may be visualized as a digital postal system: the IDBs collect and store messages from visiting users, like local post boxes, while the DMs collect messages from the IDBs and carry them to distant users, like mail vans. The process is illustrated in Figure 4.12.


Figure 4.12. An example scenario for user interactions during the trial. For a color version of the figure, see www.iste.co.uk/camara/wireless3.zip

In Figure 4.12(a), the circles show the approximate ranges of a cluster of users moving in the area represented by the underlying map view. Whenever two users come into proximity, they form a small personal network and exchange messages seamlessly (Figure 4.12(b)). Figure 4.12(c) shows how all four users in mutual proximity, sharing a common region, can form a larger personal network for sharing information.

To further illustrate the operation, consider the node interaction scenario of Figure 4.13, which shows the propagation of a message originated by user 1 and destined for user 6. Within the region shown in Figure 4.13(a) there are five users (each with Surakshit on their smartphone) and a static XOB x.1. In Figure 4.13(b), user 1 meets users 2 and 3 and forwards the message to both. After a while, user 2 visits the XOB x.1 and stores the message in the box, and user 3 meets user 5 and forwards the message to user 5. Thus, as shown in Figure 4.13(c), users 1, 2, 3 and 5 and the XOB x.1 all hold replicas of the same message, and whichever of them first meets user 6 will deliver the message; in the present scenario, as Figure 4.13(d) shows, that is the XOB x.1.
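This contact trace can be replayed as a tiny epidemic-replication sketch: on every contact a carrier hands over a copy, so the replica set grows until some carrier meets the destination. The function below is our own simplified model of that behavior, not the Surakshit forwarding logic:

```python
def replay(contacts, source, destination):
    """Replay an ordered list of (a, b) contact events. Returns the set
    of nodes holding a replica and the node that finally delivered the
    message to the destination (None if it was never delivered)."""
    carriers = {source}
    for a, b in contacts:
        if a in carriers or b in carriers:
            if destination in (a, b):
                deliverer = a if b == destination else b
                return carriers | {a, b}, deliverer
            carriers |= {a, b}  # replicate the message on contact
    return carriers, None
```

Feeding in the meetings of Figure 4.13 (users 1..6, with the XOB x.1 as node "X") reproduces the outcome described above: the replica set grows to {1, 2, 3, 5, X} and the XOB x.1 makes the final delivery to user 6.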


Figure 4.13. An example scenario for user interactions for illustrating the working principle. For a color version of the figure, see www.iste.co.uk/camara/wireless3.zip

In the way described above, the nodes exchange information among themselves, and the software suite does so seamlessly, without user intervention. In the previous example, user 1 created the message and instructed the system on his device to send it to user 6; the system determines the forwarding path intelligently and delivers the message to user 6's device at the first opportunity. The more contact opportunities among users, the faster messages reach their destinations. Wider coverage and faster delivery can be achieved by stationing IDBs at strategic locations that users often visit and by using DMs to move messages between them.


URL: https://www.sciencedirect.com/science/article/pii/B9781785480539500044

Which device is most frequently used for transferring data and why?

A flash drive is a plug-and-play portable storage device that uses non-volatile memory to store data. It is very easy to store, retrieve and manage data on a flash drive, and its main purpose is to carry and transfer files from one device to another.

What allows files to be transferred between 2 computers?

Using file transfer software: download a file transfer application and install it on both computers. Software such as Send Anywhere or SHAREit is a decent option for this method. Install and launch the software on both computers.

How is data transferred from one device to another device?

There are two methods used to transmit data between digital devices: serial transmission and parallel transmission. Serial transmission sends data bits one after another over a single channel; parallel transmission sends multiple data bits at the same time over multiple channels.
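The distinction can be illustrated with a toy model (plain Python, not hardware code): the same byte is either emitted as eight sequential bits on one channel, or as one bit per channel in a single step.

```python
def serial_tx(byte):
    """Serial: one channel, eight time steps; a list of single bits
    in order, most significant bit first."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def parallel_tx(byte):
    """Parallel: eight channels, one time step; all bits sent at once,
    modelled here as a single tuple."""
    return tuple((byte >> i) & 1 for i in range(7, -1, -1))
```

Serial needs eight transfer steps per byte but only one wire; parallel needs one step but eight wires, which is the classic cost trade-off between the two modes.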

What storage device can you recommend to use as a student?

USB memories: USB flash drives come in different memory capacities. It is advisable to buy one with a high enough capacity (say, 16 GB).