About Plush



Figure 1.

The main components of Plush are an application controller, which typically runs on a user's workstation, and a lightweight client process that runs on every node hosting the application. The set of managed clients is not limited to one platform: the same controller can manage clients across all supported platforms, including PlanetLab, the Grid, and any local clusters maintained at the user's site. Figure 1 depicts this architecture.

The clients run on each remote host or resource and execute commands on behalf of the controller. For example, the controller might instruct the clients to install RPM packages or to launch and monitor a process, and the clients notify the controller when those tasks complete.
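This request/acknowledge pattern can be sketched as below. The `Command` type and its fields are purely illustrative, not Plush's actual wire protocol, and Python is used only for brevity.

```python
import subprocess
from dataclasses import dataclass

# Hypothetical sketch of the controller-to-client exchange; these names
# and fields are illustrative only, not Plush's actual protocol.

@dataclass
class Command:
    action: str    # e.g. "install" or "run"
    argv: list     # command line to execute on the remote host

def handle_command(cmd: Command) -> dict:
    """Client side: run the requested command, then report back."""
    proc = subprocess.run(cmd.argv, capture_output=True, text=True)
    # The real client would send this acknowledgment over the network.
    return {"action": cmd.action,
            "exit_code": proc.returncode,
            "output": proc.stdout}
```

A controller would send one such command to each client and wait for the acknowledgments before moving on to the next stage of the experiment.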

Plush has a bootstrap mechanism that allows clients to be installed on remote hosts as necessary. The bootstrap scripts either download the appropriate binaries from a repository or fetch the source code and build the clients on the remote host. The former mode has proven especially useful on PlanetLab, where we can automatically install statically linked clients without installing the necessary libraries or wasting cycles compiling the clients.
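The two bootstrap modes can be sketched as a simple decision; the URLs, file names, and commands below are placeholders, not Plush's real bootstrap scripts.

```python
# Illustrative sketch of the bootstrap choice described above. The URLs,
# file names, and commands are placeholders, not Plush's real scripts.

def bootstrap_steps(use_prebuilt: bool, binary_url: str, source_url: str) -> list:
    """Return the shell steps a bootstrap script might run on a remote host."""
    if use_prebuilt:
        # PlanetLab-style fast path: fetch a statically linked client binary,
        # so no libraries need to be installed and nothing is compiled.
        return ["wget " + binary_url,
                "chmod +x plush-client",
                "./plush-client"]
    # Otherwise fetch the source and build the client on the remote host.
    return ["wget " + source_url,
            "tar xzf plush-src.tar.gz",
            "cd plush-src && make",
            "./plush-src/plush-client"]
```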

The Plush Controller

Figure 2.

The Plush controller runs on a desktop workstation, managing the remote clients and providing fault tolerance. In a typical execution scenario, the controller takes an application description and a list of available resources (such as the names of PlanetLab slices and other machines reachable via SSH) and invokes a resource matcher to select a subset of the resource pool for the application. The controller then connects to the selected nodes, installs a set of user-defined software packages, and copies project files to the nodes. Once a node is ready, Plush configures the processes that will run on it. All of the nodes synchronize on a barrier, after which the processes are started. When a process exits, or some other user-defined action signals the end of the experiment, cleanup actions are executed at the clients. Figure 2 illustrates these steps.
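The per-node preparation and barrier step can be sketched with threads standing in for remote nodes; the phase names are illustrative, not Plush's actual API.

```python
import threading

# Sketch of per-node preparation followed by a barrier, with threads
# standing in for remote nodes. Phase names are illustrative only.

NODE_PREP = ["install_software", "copy_files", "configure_processes"]

def run_node(node_id, barrier, log):
    for phase in NODE_PREP:        # each node prepares independently
        log.append((node_id, phase))
    barrier.wait()                 # block until every node is ready
    log.append((node_id, "start_processes"))

log = []
barrier = threading.Barrier(3)     # three simulated nodes
threads = [threading.Thread(target=run_node, args=(i, barrier, log))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# No node starts its processes until all nodes have finished preparing.
```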


There are several places in Plush to plug in your favorite tool. Here are the places where we currently support (simple) extensibility, along with examples of tools that could be used:

Host Directory
  Description: Enumerates available resources.
  Tools: Manual enumeration of machines; resource discovery via the PlanetLab Central database.

Host Monitor
  Description: Reports on host quality of service.
  Tools: CoDeeN, SliceStat, Ganglia, Trumpet, CoMon, and the PlanetLab node sensors.

Configuration Matcher
  Description: Maps abstract resource requirements to physical resources.

Resource Allocator
  Description: Gets resource principals for desired resources.
  Tools: Bellagio, Sirius, and other resource reservation systems.

File Transfer Method
  Description: Tools for copying files.
  Tools: scp, wget, rsync, CVS, Bullet, BitTorrent.

Software Installer
  Description: Tools for installing software.
  Tools: yum, rpm, tar, make.

Process Monitor
  Description: Watches the status of running processes, calls wait(), etc.
  Tools: The default monitor collects output and watches for termination. Application-level checks, such as CoDeeN's HTTP queries, could be added.

Process I/O
  Description: For each child process, Plush can grab the process's file descriptors and pre-exec() state.
  Tools: The default implementation can write files to disk or forward the output to the Plush controller. One could also do remote/distributed log processing and aggregation here.
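The spirit of per-process I/O capture can be sketched as follows: the parent holds the child's stdout descriptor and can stream it to disk or forward it elsewhere. The function name is hypothetical, not Plush's code.

```python
import subprocess

# Sketch of per-process I/O capture: the parent holds the child's stdout
# descriptor and streams it to a log file. Names are hypothetical.

def run_and_capture(argv, logfile):
    """Run argv, streaming its stdout into logfile; return the exit code."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE, text=True)
    with open(logfile, "w") as f:
        for line in proc.stdout:   # read the child's output as it appears
            f.write(line)          # ...or forward it to the controller here
    return proc.wait()
```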