Workflow
Make sure you have been through the local set-up page and followed all the steps before working through this workflow page.
Emulating the AWS Lambda service
Leaving `docker stats` running in a terminal is often useful to see which containers are running and how much work they're doing.
Adding the `--debug` flag to the end of `sam local` commands can be helpful.
We have tried these launch configurations, but they expect you to add your environment variables in yet another place, specify your runtime again, and add your event again. That, combined with the fact that some environment variables were overridden while others were not, was enough to make us stick with our existing VS Code launch configurations.
Testing the provisionAppEmissaries
- Terminal 1: Run docker-compose-ui from the `purpleteam-s2-containers/` root directory. Run the following command:

  ```
  docker run --name docker-compose-ui -v $(pwd):$(pwd) -w $(dirname $(pwd)) -p 5000:5000 --rm --network compose_pt-net -v /var/run/docker.sock:/var/run/docker.sock francescou/docker-compose-ui:1.13.0
  ```

- Terminal 2: Host the Lambda functions from the `purpleteam-lambda/` root directory using `sam local start-lambda`. Run the following command:

  ```
  sam local start-lambda --host 172.25.0.1 --env-vars local/env.json --docker-network compose_pt-net
  ```

- Terminal 3: Invoke `provisionAppEmissaries` from the aws cli. From the `purpleteam-lambda/` root directory, start 10 containers:

  ```
  aws lambda invoke --function-name "provisionAppEmissaries" --endpoint-url "http://172.25.0.1:3001" --no-verify-ssl --payload '{"provisionViaLambdaDto":{"items": [{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""}]}}' local/app-emissary-provisioner/out.txt
  ```

  Just be aware that on your first run the stage two images are going to be fetched, so it won't be instant
- If you haven't already got `docker stats` running, verify that the containers were started with the following command: `docker container ls`
- Bring containers down with one of the following options:
  - Run `docker-compose down` from the `purpleteam-s2-containers/app-emissary/` directory
  - Use the `deprovisionS2Containers` Lambda as seen below
  - Possibly the easiest way: use the docker-compose-ui UI as discussed in the last step of the Full system run below
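The ten-item payload above is tedious to write by hand; it can be generated instead. A minimal sketch, using the session names and browsers from the example (the commented-out invoke line mirrors the command shown above):

```shell
# Build the ten-item provisioning payload programmatically rather than by hand.
# Session names and browsers follow the example above.
items=""
for i in 1 2 3 4 5; do
  items="${items}{\"testSessionId\":\"lowPrivUser\",\"browser\":\"chrome\",\"appEmissaryContainerName\":\"\",\"seleniumContainerName\":\"\"},"
  items="${items}{\"testSessionId\":\"adminUser\",\"browser\":\"firefox\",\"appEmissaryContainerName\":\"\",\"seleniumContainerName\":\"\"},"
done
# Strip the trailing comma and wrap in the DTO envelope.
payload="{\"provisionViaLambdaDto\":{\"items\":[${items%,}]}}"
echo "$payload"
# aws lambda invoke --function-name "provisionAppEmissaries" \
#   --endpoint-url "http://172.25.0.1:3001" --no-verify-ssl \
#   --payload "$payload" local/app-emissary-provisioner/out.txt
```

The same loop works for `provisionSeleniumStandalones`; only the function name and output path change.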
Testing the provisionSeleniumStandalones
- Terminal 1: Run docker-compose-ui from the `purpleteam-s2-containers/` root directory. Run the following command:

  ```
  docker run --name docker-compose-ui -v $(pwd):$(pwd) -w $(dirname $(pwd)) -p 5000:5000 --rm --network compose_pt-net -v /var/run/docker.sock:/var/run/docker.sock francescou/docker-compose-ui:1.13.0
  ```

- Terminal 2: Host the Lambda functions from the `purpleteam-lambda/` root directory using `sam local start-lambda`. Run the following command:

  ```
  sam local start-lambda --host 172.25.0.1 --env-vars local/env.json --docker-network compose_pt-net
  ```

- Terminal 3: Invoke `provisionSeleniumStandalones` from the aws cli. From the `purpleteam-lambda/` root directory, start 10 containers:

  ```
  aws lambda invoke --function-name "provisionSeleniumStandalones" --endpoint-url "http://172.25.0.1:3001" --no-verify-ssl --payload '{"provisionViaLambdaDto":{"items": [{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"firefox", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"lowPrivUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""},{"testSessionId":"adminUser", "browser":"chrome", "appEmissaryContainerName":"", "seleniumContainerName":""}]}}' local/selenium-standalone-provisioner/out.txt
  ```

  Just be aware that on your first run the stage two images are going to be fetched, so it won't be instant
- If you haven't already got `docker stats` running, verify that the containers were started with the following command: `docker container ls`
- Bring containers down with one of the following options:
  - Run `docker-compose down` from the `purpleteam-s2-containers/selenium-standalone/` directory
  - Use the `deprovisionS2Containers` Lambda as seen below
  - Possibly the easiest way: use the docker-compose-ui UI as discussed in the last step of the Full system run below
Testing the deprovisionS2Containers
- Terminal 1: Run docker-compose-ui from the `purpleteam-s2-containers/` root directory. Run the following command:

  ```
  docker run --name docker-compose-ui -v $(pwd):$(pwd) -w $(dirname $(pwd)) -p 5000:5000 --rm --network compose_pt-net -v /var/run/docker.sock:/var/run/docker.sock francescou/docker-compose-ui:1.13.0
  ```

- Terminal 2: Host the Lambda functions from the `purpleteam-lambda/` root directory using `sam local start-lambda`. Run the following command:

  ```
  sam local start-lambda --host 172.25.0.1 --env-vars local/env.json --docker-network compose_pt-net
  ```

- Terminal 3: Invoke `deprovisionS2Containers` from the aws cli. From the `purpleteam-lambda/` root directory, run the following command:

  ```
  aws lambda invoke --function-name "deprovisionS2Containers" --endpoint-url "http://172.25.0.1:3001" --no-verify-ssl --payload '{"deprovisionViaLambdaDto":{"items": ["app-emissary", "selenium-standalone"]}}' local/s2-deprovisioner/out.txt
  ```

- If you haven't already got `docker stats` running, verify that the containers were brought down with the following command: `docker container ls`
Testing Lambda function directly
No local Lambda service needs to be running first. In our case:
- Run docker-compose-ui as already discussed
- Then simply run one of the following `sam local invoke` commands:
  - For `provisionAppEmissaries`, from the `purpleteam-lambda/` root directory, run the following command:

    ```
    echo '<same-JSON-payload-as-above>' | sam local invoke --event - --env-vars local/env.json --docker-network compose_pt-net provisionAppEmissaries
    ```

  - For `provisionSeleniumStandalones`, from the `purpleteam-lambda/` root directory, run the following command:

    ```
    echo '<same-JSON-payload-as-above>' | sam local invoke --event - --env-vars local/env.json --docker-network compose_pt-net provisionSeleniumStandalones
    ```

- Then check that the containers are running as already discussed
- Bring them down again as already discussed
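If you prefer not to pipe the event on stdin, `sam local invoke` also accepts an event file via `--event`. A small sketch; the empty items list is a placeholder, not a real event:

```shell
# Write the event to a temp file and pass it to sam local invoke instead of
# piping it on stdin. The empty items list is illustrative only.
event_file="$(mktemp)"
printf '%s' '{"provisionViaLambdaDto":{"items":[]}}' > "$event_file"
event_json="$(cat "$event_file")"
echo "$event_json"
# sam local invoke --event "$event_file" --env-vars local/env.json \
#   --docker-network compose_pt-net provisionAppEmissaries
```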
Debugging Lambda function directly
- Run docker-compose-ui as already discussed
- Then simply run one of the following commands:
  - For `provisionAppEmissaries`, from the `purpleteam-lambda/` root directory, run the following command:

    ```
    echo '<same-JSON-payload-as-above>' | sam local invoke --debug-port 5858 --env-vars local/env.json --event - --docker-network compose_pt-net provisionAppEmissaries
    ```

  - For `provisionSeleniumStandalones`, from the `purpleteam-lambda/` root directory, run the following command:

    ```
    echo '<same-JSON-payload-as-above>' | sam local invoke --debug-port 5858 --env-vars local/env.json --event - --docker-network compose_pt-net provisionSeleniumStandalones
    ```

- Debug in VS Code
- Then check that the containers are running as already discussed
- Bring them down again as already discussed
Debugging
Back End
Lambda app-emissary-provisioner
In VS Code:
We used a similar launch.json as mentioned here
Open the Lambda handler and put a break point where you want it to stop.
The VS Code instance needs to be restarted after any changes to the launch.json.
- Terminal 1: Run docker-compose-ui so that the Lambda functions are able to request that Stage Two containers be brought up and down
- Terminal 2: Run the following `sam local start-lambda` command from the `purpleteam-lambda/` root directory:

  ```
  sam local start-lambda --host 172.25.0.1 --env-vars local/env.json --docker-network compose_pt-net --debug-port 5858
  ```

- Terminal 3: Now run the `aws lambda invoke` command as discussed previously from the `purpleteam-lambda/` root directory:

  ```
  aws lambda invoke --function-name "provisionAppEmissaries" --endpoint-url "http://172.25.0.1:3001" --no-verify-ssl --payload '<same-JSON-payload-as-above>' local/app-emissary-provisioner/out.txt
  ```

  Just be aware that if the stage two images have not yet been fetched, it will take some time to do so
- In VS Code: Click on the `app-emissary-provisioner` folder, switch to the debug (Run) view, select the "app-emissary-provisioner (purpleteam-lambda)" project in the drop-down, and click the debug arrow

The process is similar for the other Lambda functions.
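We can't reproduce the exact launch.json here, but the following is a minimal sketch of the kind of attach configuration that matches the `--debug-port 5858` flag above. The name is taken from the drop-down entry mentioned earlier; the `localRoot`/`remoteRoot` paths are assumptions, so adjust them to your checkout and the function's task root:

```json
{
  "type": "node",
  "request": "attach",
  "name": "app-emissary-provisioner (purpleteam-lambda)",
  "address": "localhost",
  "port": 5858,
  "localRoot": "${workspaceFolder}/app-emissary-provisioner",
  "remoteRoot": "/var/task",
  "protocol": "inspector"
}
```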
app-scanner and sub-processes
Using VS Code, we have created a launch.json to use for this.
Make sure you have the app-scanner project loaded in VS Code.
- Terminal 1: Run docker-compose-ui so that the Lambda functions are able to request that Stage Two containers be brought up and down
- Terminal 2: Run the `sam local start-lambda` command as detailed in the Full system run
- Make sure you have your SUT running in a clean state unless you specifically want otherwise, and that your Job has its contact details for the PurpleTeam CLI to send to the orchestrator
- Terminal 3: Build the orchestrator and Testers (app-scanner for debug). This step only needs to be done for the first app-scanner debug session, and after that only if you make any code changes. From the `purpleteam-orchestrator/` root directory: `npm run dc-build-debug-app`
- Terminal 3: Bring up the orchestrator and app-scanner with the following command from the same root directory: `npm run dc-up-debug-app`
- VS Code: Switch to the debug (Run) view, click on the debug drop-down, select "Docker: Attach to Node (purpleteam-app-scanner)" and click the Run button. You should notice that the terminal running the orchestrator and app-scanner will inform you "Debugger attached", and VS Code should have broken on the first line of code in the app-scanner. At this point you are ready to step through the app-scanner and pause on any break-points you may add
- In order to reach the bulk of the app-scanner code, you will need to start the PurpleTeam CLI. If you want to step through the Test Session sub-processes, it's often useful to put a break-point in app.cuc.js on the line that logs which PID has been assigned to which Test Session (at the time of writing, this is in the `runTestSession` routine). This is useful information to take through the rest of your session to help correlate the app-scanner logs
- Now you can debug into each Cucumber sub-process; currently we have two defined in the launch.json file (you can add more if you need). Click on the debug drop-down again, select "Docker: Attach to child_process [n] (purpleteam-app-scanner)" and click the Run button. VS Code should break on the first line of code in one of the Cucumber processes. To check which process you are in, just evaluate `process.pid` in the VS Code Debug Console; this gives you a PID, which you can correlate with the PID you captured from the app-scanner log in the previous step, along with the Test Session Id. From here, if you want to debug the other Test Session, just click on it in the Call Stack pane and check `process.pid` again. This way you always know which Test Session you are debugging
Other Testers
Follow the same approach as with the app-scanner, although only a single sub-process is launched for the embedded Emissaries, which makes debugging even easier.
orchestrator
The process is similar to the Testers except there is no sub-process to be concerned about.
Front End
Discussed here
CLI
Assuming you have set the relevant environment variables detailed for the CLI, from the purpleteam root directory you can run the following command:
`npm run debug`
Or if you need to override the `NODE_ENV` environment variable to `local`, then run:
`NODE_ENV=local npm run debug`
Or if you actually want to exercise, say, the `test` command, run the following command:
`NODE_ENV=local npm run debug test`
In Chromium, open chrome://inspect and click "inspect", which will drop you into the first loaded script.
Tests
From the purpleteam root directory you can run the following command:
`npm run test:debug`
Or if you need to override the `NODE_ENV` environment variable to `local`, then run:
`NODE_ENV=local npm run test:debug`
In Chromium, open chrome://inspect and click "inspect", which will drop you into the first loaded script.
Also review the run options detailed in the CLI.
Full system run
Leaving `docker stats` running in a terminal is often useful to see which containers are running and how much work they're doing. `docker container ls` is also quite useful for watching port allocations.
- docker-compose-ui needs to be running. Running it in its own terminal is a good idea to see what is happening. You need to use the user-defined network already created so that the Lambda functions, running in the AWS-managed Docker image (what was previously docker-lambda), are able to send requests to docker-compose-ui. From the `purpleteam-s2-containers/` root directory, run the following command:

  ```
  docker run --name docker-compose-ui -v $(pwd):$(pwd) -w $(dirname $(pwd)) -p 5000:5000 --rm --network compose_pt-net -v /var/run/docker.sock:/var/run/docker.sock francescou/docker-compose-ui:1.13.0
  ```
  Additional resources:
  - docker compose ui API
  - Once running, http://localhost:5000/api/v1/projects will list your projects
  - The following command will start two containers defined by the `chrome` service in the `purpleteam-s2-containers/selenium-standalone/docker-compose.yml` file:

    ```
    curl -X PUT http://localhost:5000/api/v1/services --data '{"service":"chrome","project":"selenium-standalone","num":"2"}' -H 'Content-type: application/json'
    ```

  - Verify with either `docker stats` or `docker container ls`
  - Then bring the `purpleteam-s2-containers/selenium-standalone` `docker-compose.yml` down
  - Verify again with either `docker stats` or `docker container ls`
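A quick way to confirm docker-compose-ui is up before invoking any Lambdas is to hit the projects endpoint mentioned above. A sketch (the response shape is not assumed, only that a reachable server returns something):

```shell
# Check that docker-compose-ui is reachable and list the compose projects it
# can see. Endpoint is the one documented above.
response="$(curl -s --max-time 5 http://localhost:5000/api/v1/projects 2>/dev/null)" || response=""
if [ -n "$response" ]; then
  echo "docker-compose-ui projects: $response"
else
  echo "docker-compose-ui not reachable on http://localhost:5000"
fi
```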
- Host Lambda functions: From the `purpleteam-lambda/` root directory, run the following `sam local start-lambda` command. Running it in its own terminal is a good idea to see what is happening:

  ```
  sam local start-lambda --host 172.25.0.1 --env-vars local/env.json --docker-network compose_pt-net
  ```

  The `--host [gateway IP address of compose_pt-net]` is required to bind sam local to the user-defined bridge network `compose_pt-net` so that it is reachable from the app-scanner container. The following links were useful for working this out: `host.docker.internal`, `extra_hosts` and other comments from here down, the docker-host container, the `extra_hosts` reference, along with creating the firewall rule as mentioned above, and testing connectivity as mentioned in the Docker page by shelling into a running container
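The gateway address passed to `--host` can be looked up rather than hard-coded. A sketch that falls back to the documented 172.25.0.1 when Docker or the network is unavailable:

```shell
# Discover the gateway IP of the compose_pt-net bridge network (the address
# passed to --host above). Falls back to the documented 172.25.0.1 if Docker
# or the network is unavailable.
gateway_ip="$(docker network inspect compose_pt-net \
  --format '{{(index .IPAM.Config 0).Gateway}}' 2>/dev/null)" \
  || gateway_ip="172.25.0.1"
[ -n "$gateway_ip" ] || gateway_ip="172.25.0.1"
echo "Bind sam local to: $gateway_ip"
# sam local start-lambda --host "$gateway_ip" --env-vars local/env.json \
#   --docker-network compose_pt-net
```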
Start your SUT (NodeGoat in this example). Running it in its own terminal is a good idea to see what is happening. There are at least two options, as mentioned in the set-up
- Run docker-compose to bring the stage one containers up. Running the following commands for this step in their own terminal is a good idea to see what is happening:
  - Standard:
    - To build/rebuild images after code changes, from the `purpleteam-orchestrator/` root directory, run the following command: `npm run dc-build`
    - From the `purpleteam-orchestrator/` root directory, run the following command: `npm run dc-up`
  - Debug (orchestrator or any of the Testers; this example demos the orchestrator, for the app-scanner just swap `orchestrator` with `app`):
    - To build/rebuild images after code changes, from the `purpleteam-orchestrator/` root directory, run the following command: `npm run dc-build-debug-orchestrator`
    - From the `purpleteam-orchestrator/` root directory, run the following command: `npm run dc-up-debug-orchestrator`
    - Now you can attach to the `purpleteam-orchestrator` or `purpleteam-app-scanner` process within the container. Further details in the Debugging section
- Start the cli:
  If you are running in the `local` environment and this is the first time you are doing this on a given machine, beware that the stage two images will take some time to fetch. The terminal that you have run docker-compose-ui in will be visibly retrieving these images. There are some things that can go wrong:
  - The app-scanner may time out while testing connectivity of the yet-to-be-running stage two containers. If this happens, a timeout error message will be logged in the terminal you ran `npm run dc-up` in
  - We have also seen a `Missing region in config` error logged. If you have configured the aws cli correctly, then this is a red herring

  On subsequent runs you should not see the above-mentioned issues. If this concerns you, the timeout (found in the app-scanner) can be increased.
  Whether you are testing against a local copy of your containerised app or a copy hosted on the Internet determines how you configure the cli (purpleteam). This example demonstrates the two options mentioned in step 3. By specifying the `local` environment, you are instructing the PurpleTeam CLI to use its `config/config.local.json` and to communicate with the PurpleTeam back-end that you have already set up and have running locally (step 4). If using the `cloud`, the back-end is all taken care of for you. You will also need to specify the location of the SUT in the Job that you provide to the CLI. Examples of these can be found in the `testResources/jobs` directory:
  - Locally cloned copy of NodeGoat: The SUT details in your Job will be as follows: `"sutIp": "pt-sut-cont", "sutPort": 4000, "sutProtocol": "http",`
  - NodeGoat running on the Internet via purpleteam-iac-sut: The SUT details in your Job will be as follows, with `<your-domain-name.com>` replaced with a domain you have set up: `"sutIp": "nodegoat.sut.<your-domain-name.com>", "sutPort": 443, "sutProtocol": "https",`

  Assuming you have set the relevant environment variables detailed for the CLI, from the purpleteam root directory, run the following command: `npm start test`
- Once the test run has finished, you can check to make sure the Stage Two containers have been brought down. If you are not using `docker stats`:
  - In one terminal run the following command: `docker container ls`

  If they haven't been brought down:
  - In another terminal, from the `purpleteam-s2-containers/app-emissary/` directory, run the following command: `docker-compose down`
  - In another terminal, from the `purpleteam-s2-containers/selenium-standalone/` directory, run the following command: `docker-compose down`

  If you want to keep the Stage Two containers running after a test run to inspect them for any reason, simply change the app-scanner's `emissary.shutdownEmissariesAfterTest` config value to `false`, then rebuild the container and run. If the Stage Two containers have not been brought down, you can also shut them down with the docker-compose-ui UI. When it's running, browse to http://localhost:5000, select the specific Stage Two project and click the Down button; this is convenient as it brings all of that project's containers down with one click.
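The post-run check above can be scripted. A sketch; the name filters are assumptions about the compose project names, so adjust them if yours differ:

```shell
# Report whether any stage two containers are still running after a test run.
# The name patterns (app-emissary / selenium-standalone) come from the doc.
leftovers="$(docker container ls --format '{{.Names}}' 2>/dev/null \
  | grep -E 'app-emissary|selenium-standalone')" || leftovers=""
if [ -n "$leftovers" ]; then
  s2_down="false"
  echo "Still running: $leftovers"
else
  s2_down="true"
  echo "All stage two containers are down"
fi
```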
Useful resources
- Dockerising all of your components