author    Lars Wirzenius <liw@liw.fi>  2020-04-08 10:34:55 +0300
committer Lars Wirzenius <liw@liw.fi>  2020-04-08 10:34:55 +0300
commit    78534bf9d01abc12e883409c542982f0193e5d01 (patch)
tree      73de442978b66a21e5075e17ec74c8b5fa89be4c /contractor.md
parent    c34a077257e4e9b68ce83bf851d693381b5b9064 (diff)
Change: contractor.md to match actual implementation
Diffstat (limited to 'contractor.md')
-rw-r--r--  contractor.md | 78
1 file changed, 42 insertions(+), 36 deletions(-)
diff --git a/contractor.md b/contractor.md
index 415328c..ea958a0 100644
--- a/contractor.md
+++ b/contractor.md
@@ -188,16 +188,17 @@ digraph "arch" {
labelloc=b;
labeljust=l;
dev [shape=octagon label="Developer"];
- git [shape=tab label="VCS server"];
+ img [shape=tab label="VM image"];
+ src [shape=tab label="Source tree"];
+ ws [shape=tab label="Exported workspace"];
apt [shape=tab label="APT repository"];
- npm [shape=tab label="NPM repository"];
subgraph cluster_host {
label="Host system \n (the vulnerable bit)";
contractor [label="Contractor CLI"];
- artifacts [shape=tab label="Artifact store \n (directory)"];
subgraph cluster_contractor {
label="Manager VM \n (defence force)";
manager;
+ libvirt;
subgraph cluster_builder {
label="Worker VM \n (here be dragons)";
style=filled;
@@ -208,12 +209,14 @@ digraph "arch" {
}
dev -> contractor;
contractor -> manager;
- git -> manager;
- npm -> manager;
- apt -> manager;
- manager -> guestos;
- guestos -> manager;
- manager -> artifacts;
+ contractor -> guestos;
+ img -> contractor;
+ ws -> contractor;
+ src -> contractor;
+ apt -> guestos;
+ manager -> libvirt;
+ libvirt -> guestos;
+ contractor -> ws;
}
~~~
@@ -241,44 +244,47 @@ This high-level design is chosen for the following reasons:
technologies, although it doesn't do much to protect against
virtualisation or hardware vulnerabilities (**HostProtection**)
+**HOWEVER**, this architecture needs improvements, which will happen
+soon. The current implementation is a proof of concept only.
+
## Build process
The architecture leads to a build process that would work roughly like
this:
-* developer runs command line tool to do a build
-* command line tool boots the manager VM, which starts any services
- and proxies running in the manager VM, and configures networking and
- firewalls
+* the manager VM is already running
+* developer runs command line tool to do a build:
+ `contractor build foo.yaml`
+* command line tool copies the worker VM image into the manager VM
+* command line tool boots the worker VM
+* command line tool installs any build dependencies into the worker VM
+* command line tool copies a previously saved dump of the workspace
+ into the worker VM
* command line tool copies the source code and build recipe into the
- manager VM
-* manager VM retrieves the system image for the worker VM
-* manager VM boots worker VM
-* manager VM provides source code to worker VM
-* manager VM instructs worker VM to perform each build step in the build
- recipe, while monitoring network access and CPU use; if the manager VM
- notices any limits being exceeded, or attempts to access network
- resources other than ones allowed by developer, it will stop the
- worker VM, and report failure to the developer
-* manager VM will retrieve build artifacts from the guest VM and put
- them in an artifact directory so the developer can access them
+ worker VM's workspace
+* command line tool runs build commands in the worker VM, in the
+ source tree
+* command line tool copies out the workspace into a local directory
* command line tool reports build success or failure to the
  developer, and where the build log and build artifacts are
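The steps above can be sketched as a shell script. This is an illustration only, not the contractor tool itself: the VM aliases (`manager-vm`, `worker-vm`), file names, and the use of `scp`/`ssh`/`virsh` are assumptions, and the `run` wrapper only prints each command instead of executing it.

~~~shell
#!/bin/sh
# Dry-run sketch of the build process described above.
# Nothing here is executed; each step is printed as it would be run.
set -eu

MANAGER=manager-vm   # assumed SSH alias for the manager VM
WORKER=worker-vm     # assumed SSH alias for the worker VM

run() { echo "+ $*"; }   # print the command instead of running it

build_steps() {
    # the manager VM is already running; copy in the worker VM image
    # and boot the worker VM
    run scp worker.img "$MANAGER:images/"
    run ssh "$MANAGER" virsh create worker.xml

    # install build dependencies, then restore the previously saved
    # workspace dump into the worker VM
    run ssh "$WORKER" sudo apt-get install -y build-essential
    run scp workspace.tar.gz "$WORKER:"
    run ssh "$WORKER" tar -xf workspace.tar.gz

    # copy source tree and recipe in, run the build in the source
    # tree, then copy the workspace back out to a local directory
    run scp -r src foo.yaml "$WORKER:workspace/"
    run ssh "$WORKER" "cd workspace/src && ./build"
    run scp -r "$WORKER:workspace" ./result
}

build_steps
~~~

Running it prints the eight build steps in order, one per line, without touching any VM.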
## Implementation sketch
-The manager VM runs Debian stable, and has libvirt to run guest VMs.
-The host and the manager VM are configured to support nested VMs, if
-the host hardware supports it. The manager VM has its networking
-configured so that it can connect to hosts outside itself, but the
-worker VM can only connect to services provided by the manager VM. The
-services provided to the worker VM are an artifact store, and an HTTP
-proxy. (That's the only protocol I know of right now; more can be
-added if need be.)
-
-The artifact store is mounted to the manager VM using 9p. A web
-service in the manager VM serves the files to the worker VM. The
-developer can access the artifact store via their local file system.
+This is the current status, to be improved upon.
+
+The manager VM runs Debian 10 (buster), and has libvirt, and a
+`manager` account. The worker VM is any Debian version, as long as:
+
+* there is a `manager` account with passwordless sudo access, and the
+ manager VM's `manager` can access it via its SSH key
+* there is a `worker` account without sudo access
+
+There are currently no restrictions on network access for the worker
+VM. This will be fixed.
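+As an illustration of how such a restriction might eventually look
+(none of this is in the current implementation; the subnet, proxy
+address, and port are invented), an nftables ruleset could drop all
+forwarded worker traffic except to an APT proxy:
+
+~~~
+# Sketch only: allow the worker VM's (assumed) subnet to reach an
+# apt-cacher-ng proxy on the manager VM; drop everything else.
+table inet contractor {
+    chain forward {
+        type filter hook forward priority 0; policy drop;
+        ip saddr 192.168.122.0/24 ip daddr 192.168.122.1 tcp dport 3142 accept
+    }
+}
+~~~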
+
+The developer has SSH key access to the manager VM. The command line
+tool uses this to copy files to the manager and worker VMs, to copy
+results back from the worker VM, and to control the worker VM.
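The two account requirements above can be sketched as a shell script for preparing a worker VM image. This is a hypothetical illustration, not part of contractor: the SSH key filename is invented, and the `run` wrapper only prints each command (you would only execute such steps inside a throwaway VM).

~~~shell
#!/bin/sh
# Dry-run sketch: set up the 'manager' and 'worker' accounts a worker
# VM needs, as described above. Commands are printed, not executed.
set -eu

run() { echo "+ $*"; }   # print the command instead of running it

prepare_accounts() {
    # 'manager' account: passwordless sudo, reachable via the manager
    # VM's SSH key (key filename is an assumption)
    run useradd --create-home manager
    run install -d -m 700 /home/manager/.ssh
    run install -m 600 manager_key.pub /home/manager/.ssh/authorized_keys
    run sh -c "echo 'manager ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/manager"

    # 'worker' account: ordinary user, no sudo access
    run useradd --create-home worker
}

prepare_accounts
~~~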
# Acceptance criteria