The architecture of the proctoring system is shown in the figure below.
The server part consists of the following components:
Use the hardware calculator to size the server's computing power for the expected load. The server side of the system requires an operating system (OS) based on a 64-bit Linux distribution; Ubuntu, Debian, CentOS, or another similar distribution can be used. The proctoring system requires Docker or another environment capable of running Docker containers. Alternatively, you can use cloud services for the DBMS server (MongoDB-compatible) and the file storage (S3-compatible).
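Before installing, the 64-bit requirement can be checked directly from the shell. A minimal sketch (the helper name and the list of architectures are our assumptions, not part of the product):

```shell
# Returns success for architecture names that indicate a 64-bit OS.
# Extend the list for other platforms you support.
is_64bit() {
  case "$1" in
    x86_64|amd64|aarch64|arm64) return 0 ;;
    *) return 1 ;;
  esac
}

if is_64bit "$(uname -m)"; then
  echo "64-bit OS: OK"
fi
```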
This is the preferred way to run the system on a single server.
This section of the instructions describes how to run all system components via Docker Compose (see Docker Compose installation instructions).
All settings for running the proctoring components (containers) are defined in the docker-compose.yml file and in environment variables in the .env file. The launch script is designed so that a 90-day SSL certificate from Let's Encrypt is issued automatically for the specified domain (the DOMAIN_NAME variable). When the containers are restarted, the SSL certificate is renewed automatically if fewer than 15 days remain before it expires. This is a typical configuration file for running the containers:
docker-compose.yml
services:
  mongo:
    image: public.ecr.aws/s8z4o5f0/mongo:latest
    restart: always
    networks:
      - backend
    #ports:
    #  - "0.0.0.0:27017:27017/tcp"
    volumes:
      - db:/data/db
      - configdb:/data/configdb
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_SECRET:-secret}
    healthcheck:
      test: [ "CMD", "curl", "--connect-timeout", "10", "--silent", "http://127.0.0.1:27017" ]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 10s

  minio:
    image: public.ecr.aws/s8z4o5f0/minio:latest
    restart: always
    networks:
      - backend
    #ports:
    #  - "0.0.0.0:9000:9000/tcp"
    #  - "0.0.0.0:9001:9001/tcp"
    volumes:
      - storage:/data
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: ${MINIO_SECRET:-minioadmin}
      MINIO_BUCKET: proctoring
    healthcheck:
      test: [ "CMD", "curl", "--connect-timeout", "10", "--silent", "http://127.0.0.1:9000/minio/health/live" ]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 10s

  recognizer:
    image: public.ecr.aws/s8z4o5f0/recognizer:latest
    restart: always
    networks:
      - backend
    #ports:
    #  - "0.0.0.0:8080:8080/tcp"
    environment:
      PORT: 8080
      API_KEY: ${RECOGNIZER_SECRET:-secret}
    healthcheck:
      test: [ "CMD", "curl", "--connect-timeout", "10", "--silent", "http://127.0.0.1:8080/ping" ]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 10s

  turn:
    image: public.ecr.aws/s8z4o5f0/turn:latest
    restart: always
    network_mode: host
    volumes:
      - turn:/var/lib/coturn
    environment:
      TURN_PORT: 3478
      TURN_USER: webrtc:${TURN_SECRET:-webrtc}
      TURN_EXTERNAL_IP: ""
      TURN_MIN_PORT: 49152
      TURN_MAX_PORT: 65535
    healthcheck:
      test: [ "CMD", "nc", "-z", "127.0.0.1", "3478" ]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 10s

  acme:
    image: public.ecr.aws/s8z4o5f0/acme:latest
    network_mode: host
    volumes:
      - acme:/.lego
    environment:
      ACME_DOMAIN: ${DOMAIN_NAME:-localhost}
      ACME_EMAIL: admin@${DOMAIN_NAME:-localhost}
      ACME_DAYS: 15
      ACME_WEBROOT: /.lego/webroot

  node:
    depends_on:
      mongo:
        condition: service_healthy
      minio:
        condition: service_healthy
      recognizer:
        condition: service_healthy
      turn:
        condition: service_healthy
      acme:
        condition: service_completed_successfully
    image: public.ecr.aws/s8z4o5f0/node:latest
    restart: always
    networks:
      - backend
    ports:
      - "0.0.0.0:80:80/tcp"
      - "0.0.0.0:443:443/tcp"
    volumes:
      - acme:/etc/acme
    environment:
      DISTRIB_URL: ${DISTRIB_URL}
      DISTRIB_KEY: ${DISTRIB_KEY}
      MONGO_URI: mongodb://admin:${MONGO_SECRET:-secret}@mongo:27017/proctoring?authSource=admin
      MINIO_URI: http://minioadmin:${MINIO_SECRET:-minioadmin}@minio:9000/proctoring
      RECOGNIZER_URI: http://${RECOGNIZER_SECRET:-secret}@recognizer:8080/upload?position=1
      TURN_URI: turn://webrtc:${TURN_SECRET:-webrtc}@${DOMAIN_NAME:-localhost}:3478?transport=udp+tcp
      SESSION_KEY: ${SESSION_KEY:-secret}
      #SSL_KEY: -----BEGIN RSA PRIVATE KEY-----\n...
      #SSL_CERT: -----BEGIN CERTIFICATE-----\n...
      SSL_KEY: /etc/acme/certificates/${DOMAIN_NAME:-localhost}.key
      SSL_CERT: /etc/acme/certificates/${DOMAIN_NAME:-localhost}.crt
      PORT: 443
      HTTP_PORT: 80
      HTTP_WEBROOT: /etc/acme/webroot
      MANAGER_USERNAME: ${MANAGER_USERNAME:-manager}
      MANAGER_PASSWORD: ${MANAGER_PASSWORD:-changeme}
      MANAGER_HOST: ${DOMAIN_NAME}
    healthcheck:
      test: [ "CMD", "curl", "--connect-timeout", "10", "--silent", "http://127.0.0.1/ping" ]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 1m

networks:
  backend:

volumes:
  db:
  configdb:
  storage:
  turn:
  acme:
The .env file contains environment variables in the format VARIABLE=VALUE, which are described in the table below:
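As an illustration only, a minimal .env might look like this (every value below is a placeholder to replace with your own, not a default to keep):

```shell
# Example .env — all values are placeholders
DOMAIN_NAME=your-domain.com
MONGO_SECRET=ChangeMe123
MINIO_SECRET=ChangeMe456
RECOGNIZER_SECRET=ChangeMe789
TURN_SECRET=ChangeMe012
SESSION_KEY=ChangeMe345
DISTRIB_URL=https://files.proctoring.app/dist/proctoring-vXYZ.zip
DISTRIB_KEY=secret
MANAGER_USERNAME=manager
MANAGER_PASSWORD=changeme
```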
ATTENTION: the environment variables MONGO_SECRET, MINIO_SECRET, TURN_SECRET and RECOGNIZER_SECRET must not contain special characters; only the characters a-zA-Z0-9 are allowed.
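To stay within that restriction, secrets can be generated directly from the allowed alphabet. A sketch (the helper name is ours; for lengths much above 100 characters, increase the amount of random input):

```shell
# Emit a random secret built only from a-zA-Z0-9 (default length 32).
# 96 random bytes give ~120 usable characters after filtering.
gen_secret() {
  openssl rand -base64 96 | LC_ALL=C tr -dc 'a-zA-Z0-9' | head -c "${1:-32}"
}

MONGO_SECRET="$(gen_secret)"
MINIO_SECRET="$(gen_secret)"
```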
You can create and run all containers at once with the following command from the directory where the docker-compose.yml file is located:
docker compose up -d
You can stop and remove all containers at once with the following command from the directory where the docker-compose.yml file is located:
docker compose down
This method lets you manage the Docker containers more flexibly, without using Docker Compose.
This section of the manual describes how to run all system components in Docker (see Docker installation instructions).
3.2.1. Single server
Create volumes on the host where the database files will be stored:
docker volume create db
docker volume create configdb
Run container mongo:
docker run -d --name mongo --network host --restart always \
-v db:/data/db \
-v configdb:/data/configdb \
-e "MONGO_INITDB_ROOT_USERNAME=admin" \
-e "MONGO_INITDB_ROOT_PASSWORD=secret" \
public.ecr.aws/s8z4o5f0/mongo
Environment variables (set in the "-e" parameter):
# database administrator login
MONGO_INITDB_ROOT_USERNAME=admin
# database administrator password
MONGO_INITDB_ROOT_PASSWORD=secret
DB connection string:
mongodb://admin:secret@127.0.0.1:27017/proctoring?authSource=admin
3.2.2. Cluster configuration
Create volumes on the host where the database files and configs will be stored:
docker volume create db
docker volume create configdb
For MongoDB, the cluster is configured as a Replica Set with one Primary node and one or more Secondary nodes. For the MongoDB cluster nodes to communicate with each other, the same key file must be present on all nodes. You can generate a key with the command:
KEYFILE=/var/lib/docker/volumes/configdb/_data/mongo.keyfile
if [ ! -e "$KEYFILE" ]
then
openssl rand -base64 756 | tee $KEYFILE
chmod 400 $KEYFILE
fi
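A quick way to sanity-check a generated keyfile: it should be non-empty, decode as base64, and be readable only by its owner. A sketch using a temporary path (substitute the volume path from above for real use):

```shell
# Generate a keyfile at a temporary path and verify it looks sane.
KEYFILE="$(mktemp)"
openssl rand -base64 756 > "$KEYFILE"
chmod 400 "$KEYFILE"

[ -s "$KEYFILE" ]                         # non-empty
base64 -d "$KEYFILE" > /dev/null          # decodes cleanly
[ "$(stat -c '%a' "$KEYFILE")" = "400" ]  # owner read-only
```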
Run a mongo container on a replica named "rs0" on each host (for example, mongodb01 and mongodb02):
docker run -d --name mongo --network host --restart always \
-v db:/data/db \
-v configdb:/data/configdb \
-e "MONGO_INITDB_ROOT_USERNAME=admin" \
-e "MONGO_INITDB_ROOT_PASSWORD=secret" \
public.ecr.aws/s8z4o5f0/mongo \
mongod --replSet rs0 --keyFile /data/configdb/mongo.keyfile
Environment variables (set in the "-e" parameter):
# database administrator login
MONGO_INITDB_ROOT_USERNAME=admin
# database administrator password
MONGO_INITDB_ROOT_PASSWORD=secret
Initialize the cluster with the command:
docker exec -it mongo mongosh -u admin -p secret --authenticationDatabase admin
> rs.initiate(
    {
      _id: "rs0",
      version: 1,
      members: [
        { _id: 0, host: "mongodb01:27017" },
        { _id: 1, host: "mongodb02:27017" }
      ]
    }
  )
DB connection string:
mongodb://admin:secret@mongodb01:27017,mongodb02:27017/proctoring?authSource=admin&replicaSet=rs0
3.3.1. Local storage MinIO
Create a volume on the host where the storage files will be kept:
docker volume create storage
Run container minio:
docker run -d --name minio --network host --restart always \
-v "storage:/data" \
-e "MINIO_ROOT_USER=minioadmin" \
-e "MINIO_ROOT_PASSWORD=minioadmin" \
-e "MINIO_BUCKET=proctoring" \
public.ecr.aws/s8z4o5f0/minio
Environment variables (set in the -e parameter):
# login to access the storage
MINIO_ROOT_USER=minioadmin
# password to access the storage
MINIO_ROOT_PASSWORD=minioadmin
# create default bucket
MINIO_BUCKET=proctoring
# server host and port
MINIO_HOST=""
MINIO_PORT=9000
# web console host and port
MINIO_CONSOLE_HOST=""
MINIO_CONSOLE_PORT=9001
# limiting the number of parallel requests
MINIO_API_REQUESTS_MAX=1000
# limiting the timeout for processing a request
MINIO_API_REQUESTS_DEADLINE=1m
Storage connection string:
http://minioadmin:minioadmin@127.0.0.1:9000/proctoring
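The connection string follows the usual URL shape, http://USER:PASSWORD@HOST:PORT/BUCKET, so for scripting its parts can be pulled out with plain parameter expansion. A sketch (the variable names are ours):

```shell
# Split a storage connection string into its components.
uri='http://minioadmin:minioadmin@127.0.0.1:9000/proctoring'

rest="${uri#*://}"        # strip the scheme
creds="${rest%%@*}"       # minioadmin:minioadmin
hostpart="${rest#*@}"     # 127.0.0.1:9000/proctoring

user="${creds%%:*}"
password="${creds#*:}"
host="${hostpart%%/*}"    # 127.0.0.1:9000
bucket="${hostpart#*/}"   # proctoring
```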
3.3.2. MinIO storage tiers
When storing large amounts of data, it can be worthwhile to configure two storage tiers: “hot” and “cold”. A small amount of “hot” storage on fast disks keeps the system responsive under heavy load, while archived data lives in “cold” storage on slower, cheaper disks. A transition rule then moves data automatically from “hot” to “cold” storage (for example, after 1 day) without losing access to it through the proctoring system. Below are step-by-step instructions for configuring two MinIO servers with automatic data transition from one server to the other:
where <user> and <password> are storage access keys.
where <minio1LifecycleAdmin> is the policy administrator login, and <LongRandomSecretKey> is the policy administrator password.
where <user> and <password> are access keys to the second MinIO server, “proctoring” is the name of the bucket, and http://127.0.0.1:9100 is the address of the second MinIO server.
You can read more about setting up storage levels and transfer rules in the official MinIO documentation. This feature requires MinIO Server version RELEASE.2023-03-20T17-17-53Z or higher. If you are using an earlier version, you will need to migrate to a newer version of MinIO Server.
3.3.3. Amazon S3
You can use Amazon S3 as the storage; nothing extra needs to be installed for this. Storage connection string ("proctoring" bucket):
https://access_key_id:secret_key@s3.eu-west-1.amazonaws.com/proctoring?region=eu-west-1
Note: "eu-west-1" must be the S3 region you actually use; "access_key_id" and "secret_key" may contain only uppercase or lowercase Latin letters and digits (regenerate them if they do not); "proctoring" is the name of an S3 bucket that must be created beforehand.
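Because the connection string cannot carry special characters in the keys, it is worth validating them before use. A sketch (the function name is ours):

```shell
# Check that an access key contains only Latin letters and digits,
# as required for the S3 connection string above.
valid_s3_key() {
  printf '%s' "$1" | LC_ALL=C grep -Eq '^[A-Za-z0-9]+$'
}
```

For example, `valid_s3_key "$ACCESS_KEY_ID" || echo "regenerate this key"`.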
3.3.4. Azure Blob Storage
You can use Microsoft Azure Blob Storage as the storage, but you need MinIO Gateway to work with it. Note that Gateway support has been removed from recent MinIO versions, so this feature is only available in releases prior to June 1, 2022. Start the minio container with the gateway azure parameter:
docker run -d --name minio --network host --restart always \
-e "MINIO_ROOT_USER=azurestorageaccountname" \
-e "MINIO_ROOT_PASSWORD=azurestorageaccountkey" \
public.ecr.aws/s8z4o5f0/minio \
gateway azure
Environment variables (set in the -e option):
# login to access Azure Blob Storage
MINIO_ROOT_USER=azurestorageaccountname
# key to access Azure Blob Storage
MINIO_ROOT_PASSWORD=azurestorageaccountkey
Storage connection string ("proctoring" bucket):
http://azurestorageaccountname:azurestorageaccountkey@127.0.0.1:9000/proctoring
3.3.5. Google Cloud Storage
You can use Google Cloud Storage as the storage, but you need MinIO Gateway to work with it. Note that Gateway support has been removed from recent MinIO versions, so this feature is only available in releases prior to June 1, 2022. First, create a service account key for GCS and obtain a JSON file with the connection parameters.
Start the minio container with the gateway gcs parameter:
docker run -d --name minio --network host --restart always \
-v /path/to/credentials.json:/credentials.json \
-e "GOOGLE_APPLICATION_CREDENTIALS=/credentials.json" \
-e "MINIO_ROOT_USER=minioadmin" \
-e "MINIO_ROOT_PASSWORD=minioadmin" \
public.ecr.aws/s8z4o5f0/minio \
gateway gcs yourprojectid
Environment variables (set in the -e option):
# Google Cloud Storage credentials
GOOGLE_APPLICATION_CREDENTIALS=/credentials.json
Storage connection string ("proctoring" bucket):
http://minioadmin:minioadmin@127.0.0.1:9000/proctoring
Run container recognizer:
docker run -d --name recognizer \
--network host --restart always \
-e "API_KEY=secret" \
public.ecr.aws/s8z4o5f0/recognizer
Environment variables (set in the -e parameter):
# server IP
HOST=0.0.0.0
# server port
PORT=8080
# number of workers
THREADS=4
# API access key
API_KEY=secret
# flag for rebuilding if the processor
# does not support AVX instructions
DLIB_REBUILD=1
Recognition API connection string:
http://secret@127.0.0.1:8080/recognize
Recognition API connection string for multiple servers (for example recognizer01 and recognizer02):
http://secret@recognizer01:8080,recognizer02:8080/recognize
Run container turn:
docker run -d --name turn --network host --restart always \
public.ecr.aws/s8z4o5f0/turn
Ports used (must be reachable from clients):
Environment variables (set in the -e option):
# server port (tcp and udp)
TURN_PORT=3478
# username and password
TURN_USER="webrtc:webrtc"
# external IP, determined automatically if not specified
TURN_EXTERNAL_IP=""
# minimum port
TURN_MIN_PORT=49152
# maximum port
TURN_MAX_PORT=65535
TURN connection string:
turn://webrtc:webrtc@<external-ip>:3478?transport=tcp+udp
where <external-ip> is the external IP address of the TURN server.
To check the operation of the TURN server in a browser, you can use the Trickle ICE page. Enter the server connection parameters on that page:
Then click on "Add Server" and "Gather candidates". As a result, the table on this page should list the following connection types in the "Component Type" field:
If at least one of these types is missing from the list, access to the TURN server is configured incorrectly. Check whether the connections are blocked by a firewall or by intermediate routers on the network; a common cause of a failed TURN check is blocked UDP traffic between the client and the server. You can test UDP connectivity between a client and the server on a specific port with the following commands:
On the server:
nc -u -l -p <udp-port>
On the client:
echo -n "ping" | nc -u -w 5 <external-ip> <udp-port>
3.6.1. Let's Encrypt SSL certificate
The system requires a valid SSL certificate for the domain. You can issue a free Let's Encrypt SSL certificate valid for 90 days using the lego utility:
docker volume create acme
docker run --network host \
-v "acme:/.lego" \
goacme/lego \
--http --accept-tos --key-type=rsa2048 \
--domains=your-domain.com \
--email=admin@your-domain.com \
run
After the command completes successfully, the certificate will be available in the "/var/lib/docker/volumes/acme/_data/certificates/" directory:
You can renew the certificate with the command:
docker run --network host \
-v "acme:/.lego" \
goacme/lego \
--domains=your-domain.com \
--email=admin@your-domain.com \
--accept-tos --http \
renew --days 14
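The renew command only does anything when fewer than 14 days of validity remain, so it is safe to run it on a schedule. A hypothetical crontab entry (the schedule, domain, and email are assumptions to adapt):

```crontab
# Attempt certificate renewal every Monday at 04:00
0 4 * * 1  docker run --network host -v "acme:/.lego" goacme/lego --domains=your-domain.com --email=admin@your-domain.com --accept-tos --http renew --days 14
```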
3.6.2. Running instance
The application server instance must be bound to a single domain, for which a license will be issued. The application server can run on a single node or on multiple nodes (horizontal scaling). When running on multiple nodes, the servers must be on the same network and able to communicate over UDP. Instance settings are specified via environment variables when starting the container. The settings should be identical on all nodes, except for the HOSTNAME environment variable, which must be unique for each node.
Start the container node:
docker run -d --name node --network host --restart always \
--tmpfs /tmp \
-v "acme:/etc/acme" \
-e "DISTRIB_URL=https://files.proctoring.app/dist/proctoring-vXYZ.zip" \
-e "DISTRIB_KEY=secret" \
-e "MONGO_URI=mongodb://admin:secret@127.0.0.1:27017/proctoring?authSource=admin" \
-e "MINIO_URI=http://minioadmin:minioadmin@127.0.0.1:9000/proctoring" \
-e "RECOGNIZER_URI=http://secret@127.0.0.1:8080/recognize" \
-e "TURN_URI=turn://webrtc:webrtc@127.0.0.1:3478?transport=tcp+udp" \
-e "SESSION_KEY=secret" \
public.ecr.aws/s8z4o5f0/node
Environment variables (set in the -e parameter):
# proctoring system distribution kit URL
DISTRIB_URL=https://files.proctoring.app/dist/proctoring-vX_Y_Z.zip
# distribution kit password (issued separately)
DISTRIB_KEY=secret
# server IP
HOST=0.0.0.0
# server port
PORT=443
# port forwarding from http to https
HTTP_PORT=80
# server response timeout
HTTP_TIMEOUT=60
# directory for /.well-known/acme-challenge
HTTP_WEBROOT=/etc/acme/webroot
# number of threads, by default equal to the number of vCPUs
THREADS=2
# MongoDB connection string
MONGO_URI=mongodb://admin:secret@127.0.0.1:27017/proctoring?authSource=admin
# MinIO/S3 connection string
MINIO_URI=http://minioadmin:minioadmin@127.0.0.1:9000/proctoring
# recognition API connection string
RECOGNIZER_URI=http://secret@127.0.0.1:8080/recognize
# TURN Server connection string
TURN_URI=turn://webrtc:webrtc@127.0.0.1:3478?transport=tcp+udp
# secret key for sessions (random character set)
SESSION_KEY=secret
# session lifetime in minutes
SESSION_EXPIRES=180
# private key (content or file path)
SSL_KEY=/etc/acme/certificates/your-domain.com.key
# SSL certificate (content or file path)
SSL_CERT=/etc/acme/certificates/your-domain.com.crt
# certification authority bundle (content or file path)
SSL_CA=/etc/acme/certificates/your-domain.com.issuer.crt
# limit on the size of uploaded files in MiB
UPLOAD_LIMIT=100
# storage time of unused downloaded files in minutes
UPLOAD_EXPIRES=60
# time of caching statics on the client in minutes
STATIC_EXPIRES=60
# directory of static files
STATIC_DIR=/path/to/dir
# manager login
MANAGER_USERNAME=manager
# manager password
MANAGER_PASSWORD=changeme
# host (and port, if non-standard) to login under the manager role
MANAGER_HOST=localhost:3000
# password complexity with a regular expression
# minimum 12 chars, at least one uppercase, one lowercase, one number
PASSWORD_RULE=^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z]).{12,}$
# number of unsuccessful authentication attempts
PASSWORD_ATTEMPTS=5
# lock account after maximum login attempts for N minutes
PASSWORD_HOLDOFF=5
# maximum depth of old password history for checking used passwords
PASSWORD_HISTORY=5
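Before starting the node container, it is worth checking that SSL_CERT and SSL_KEY actually belong together. A sketch that compares the public keys embedded in each (the function name is ours; works for RSA and EC keys):

```shell
# Succeeds when the certificate's public key equals the public key
# derived from the private key.
cert_matches_key() {
  [ "$(openssl x509 -noout -pubkey -in "$1")" = "$(openssl pkey -pubout -in "$2")" ]
}
```

For example: `cert_matches_key /etc/acme/certificates/your-domain.com.crt /etc/acme/certificates/your-domain.com.key`.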
Usually the following settings are not required. However, if you already use a reverse proxy, you may need to configure the proctoring system to work through it.
Access to the system from the Internet can be set up through an Nginx reverse proxy; on Ubuntu it can be installed as follows:
sudo apt-get install -y nginx
Virtual host configuration in nginx:
location / {
    client_max_body_size 100m;
    proxy_pass http://127.0.0.1:3000;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_pass_request_headers on;
    proxy_set_header Host $host;
    proxy_set_header Origin $http_origin;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Authorization $http_authorization;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-By $server_addr:$server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
}
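The location block above must live inside a server context. A minimal sketch of the surrounding configuration (the listen directives, certificate paths, and domain are assumptions to adapt):

```nginx
server {
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate     /etc/acme/certificates/your-domain.com.crt;
    ssl_certificate_key /etc/acme/certificates/your-domain.com.key;

    # place the "location /" block shown above here
}
```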
Installing Apache Web Server on Ubuntu:
sudo apt-get install -y apache2
Apache 2.4 virtual host configuration:
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/socket.io [NC]
RewriteCond %{QUERY_STRING} transport=websocket [NC]
RewriteRule /(.*) ws://127.0.0.1:3000/$1 [P,L]
<Location "/">
# RequestHeader set Host "your-proctoring-domain"
ProxyPreserveHost On
ProxyPass "http://127.0.0.1:3000/"
ProxyPassReverse "http://127.0.0.1:3000/"
</Location>
Apache must have mod_proxy, mod_proxy_http, mod_proxy_wstunnel, and mod_rewrite modules enabled.
Below are the steps to perform, in order, to update the proctoring system. It is not always necessary to update all components: if only the application server has changed, it is sufficient to update only the node container.
Keep in mind that upgrading individual system components may require additional migration actions.
5.1.1. Upgrade MongoDB
Our Docker Registry contains three MongoDB versions (tags):
Before upgrading to the next version of MongoDB, you should change the compatibility version in your database:
Connect to the database with a client (e.g. mongosh):
docker exec -it mongo mongosh -u admin -p secret --authenticationDatabase admin
To set or update "featureCompatibilityVersion", run the following command:
db.adminCommand( { setFeatureCompatibilityVersion: "5.0" } )
See more details in the official upgrade guide.
5.1.2. Upgrade MinIO
Upgrading MinIO from "RELEASE.2022-04-16T04-26-02Z" to "latest" requires a data migration. To do that, you need to:
public.ecr.aws/s8z4o5f0/minio:latest
See more details in the official migration guide.
5.1.3. Upgrade Recognizer
You should use the "latest" tag of the Recognizer Docker image (public.ecr.aws/s8z4o5f0/recognizer:latest) with proctoring system v4.13 and higher. For older versions, use the "old" tag.
To back up the database:
docker exec <mongo_container> mongodump -u <user> -p <password> --authenticationDatabase admin -d <db> --archive --gzip > /path/to/backup/db.dump
where <mongo_container> is the name of the "mongo" container, which can be viewed using the command docker ps; <user> is the database user login; <password> is the database user password; <db> is the name of the database ("proctoring" by default); /path/to/backup/db.dump is the dump file.
If the mongo image has not been updated, this step is not necessary.
To back up the file storage:
wget -O /usr/local/bin/mc https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x /usr/local/bin/mc
mc alias set local http://127.0.0.1:9000 <access_key> <access_secret>
mc mirror local/proctoring /path/to/backup/s3
where <access_key> is the storage access key; <access_secret> is the storage access password; proctoring is the name of the proctoring bucket; /path/to/backup/s3 is the directory for saving the storage backup.
If the minio image has not been updated, this step is not necessary.
Stop and remove all containers, remove all images:
docker compose down
docker image prune
Start a new installation:
docker compose up -d
To read a specific container's log:
docker logs -f <container_name>
where <container_name> is the name of the container, which can be viewed using the command docker ps.
To restore the database:
cat /path/to/backup/db.dump | docker exec -i <mongo_container> mongorestore -u <user> -p <password> --authenticationDatabase admin --archive --gzip
where <mongo_container> is the name of the "mongo" container, which can be viewed using the command docker ps; <user> is the DB user login; <password> is the DB user password; /path/to/backup/db.dump is the dump file.
If you have not made a backup copy of the database, then this step should be skipped.
To restore the file storage data:
mc alias set local http://127.0.0.1:9000 <access_key> <access_secret>
mc mirror /path/to/backup/s3/ local/proctoring
where <access_key> is the storage access key; <access_secret> is the storage access password; proctoring is the name of the proctoring bucket; /path/to/backup/s3/ is the directory containing the storage backup.
If you have not made a backup copy of the file storage, then this step should be skipped.
The system is available at the address specified during deployment (for example, https://your-server-domain). The default authorization parameters for the manager are as follows:
After the first login as the manager, you need to add a host configuration based on a license key, which is issued by the supplier for a specific domain, term, and volume.