An HTTP REST API to create, view, update, and delete small data "fragments" such as text and images. The system stores each fragment's data together with its metadata (size, type, and creation/modification dates). All operations require authentication, since fragments belong to a specific user. The API can also convert fragment data between formats: for example, a Markdown fragment can be retrieved as HTML, a JPEG as a PNG, and so on. The system is fully tested, deployed to AWS, and designed to scale so that it can store massive amounts of data.
Development Setup Commands
- General Commands
- Docker (with example)
- Dockerfile Optimizations
- EC2 Environment & Docker on EC2 (with example)
- Amazon Elastic Container Registry
Testing Endpoints
- Available Routes
- Supported file types
- Get Server Responses in Terminal using Curl
- Expected Server Responses
- Testing and Code Coverage
npm start
Start server
npm run dev
Start the server in development mode (with nodemon and the logger in debug mode)
npm run debug
Start the server in debug mode (with nodemon, the logger in debug mode, and the VSCode debugger launched via launch.json)
If the debugger doesn't start (doesn't stop at breakpoints, etc.)
If any breakpoints are set, the application should hit them. If it doesn't, the problem might be that Auto Attach is not enabled, so the debugger never gets attached:
- In VSCode do `Ctrl+Shift+P`
- Find and select `Debug: Toggle Auto Attach`
- Select `Only With Flag`, which attaches the debugger when it sees the `--inspect` flag, as set up in the `npm run debug` script.
npm run lint
Check codebase for errors and enforce coding standards.
hadolint Dockerfile
Check Dockerfile for errors and enforce best practices.
npm run coverage
Run all tests and generate code coverage report.
chmod +x ./scripts/local-aws-setup.sh
Make the `./scripts/local-aws-setup.sh` script executable
npm version <version> -m "Release v<version>"
Update the package.json version and create a matching git tag
git push origin master --tags
Push tag
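For example, cutting a hypothetical v1.1.0 release would look like this (the version number is only illustrative):

```bash
# Bump the version in package.json and create the matching git tag
npm version 1.1.0 -m "Release v1.1.0"

# Push the commit and the new tag to GitHub
git push origin master --tags
```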
Authentication
docker login --username <username> --password "<password>"
Build docker image
`-t fragments:latest` tags the image with a name (fragments) and a version (latest)
docker build -t fragments:latest .
View created image
docker image ls fragments
Run docker container
Option Meanings
- `--name fragments` sets the container name
- `--env-file <.env>` adds environment variables from a local .env file
- `-p 8080:8080` binds the host's port 8080 to the container's port 8080 (left 8080 = host/local machine; right 8080 = container)
- `fragments:latest` is the image name
More options
- To have tini handle signals, add the `--init` flag after `docker run`. NOTE: `--init` won't work on alpine images
  - i.e. `docker run --init --rm --name fragments --env-file env.jest -p 8080:8080 fragments:latest`
- To detach the container (run it in the background as a daemon), add the `-d` flag
  - i.e. `docker run --rm --name fragments --env-file env.jest -p 8080:8080 -d fragments:latest`
  - this detaches the container and prints that container's id as output
  - to view logs for the detached container, run `docker logs -f <detached-container-id>` (`-f` follows the logs; it can be omitted)
- To override environment variable values, add the `-e` flag with key=value
  - i.e. `docker run --rm --name fragments --env-file env.jest -e LOG_LEVEL=debug -p 8080:8080 fragments:latest`
  - this discards the `LOG_LEVEL` value from env.jest and sets it to `debug`
docker run --rm --name <fragments> --env-file <.env> -p 8080:8080 <fragments:latest>
Screenshot: the fragments microservice running as a Docker container in detached mode, using host port 5555 for container port 8080, with curl hitting the server's health check route inside the container.
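A sketch of the commands behind that screenshot (the `.env` file name is an assumption; any env file would work):

```bash
# Run detached, mapping host port 5555 to container port 8080
docker run --rm --name fragments --env-file .env -p 5555:8080 -d fragments:latest

# Hit the health check route through the mapped host port
curl.exe -s localhost:5555 | jq
```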
Run docker compose
Additional Options
- use the `-d` flag to run service(s) in the background (i.e. `docker compose up -d`)
- use `down` to stop service(s): `docker compose down`
Changed the codebase? Re-build the image (if changes are made to the source code) by using the `--build` flag to force a rebuild
- `docker compose up --build`
- `docker compose up --build -d` (re-build in background)
docker compose up
Push to DockerHub
- Create a Tag
docker tag <fragments>:<latest> <mdmytrenko/fragments>:<latest>
- Make a push
docker push <mdmytrenko/fragments>
Available Options
- `mdmytrenko/fragments` is the image name
- if the tag is omitted, `:latest` is used by default, i.e. `mdmytrenko/fragments:latest`
- to push all existing tags, add the `--all-tags` flag: `docker push --all-tags mdmytrenko/fragments`
Remove image (locally)
docker rmi <image-name>
How is the Dockerfile currently optimized?
- Use of Alpine Linux with an explicit image sha version:
  `FROM node:18.17.0-alpine3.17@sha256:e0641d0ac1f49f045c8dc05bbedc066fc7c88bc2730ead423088eeb0788623a1`
- Installation of only production dependencies in the docker image with `RUN npm ci --production`, as the docker container doesn't need devDependencies.
- Downgrading the user's rights (from the root user) with `USER node` for security purposes (an attacker cannot use root privileges if they get access to the container).
- Use of tini, `ENTRYPOINT ["tini", "--"]`, to control signals so the docker container can be terminated when necessary. Since the Alpine version doesn't have tini pre-installed, tini is added to the image with `RUN apk add --no-cache tini`.
- A healthcheck was added so that the container knows whether the application is running properly:
  `HEALTHCHECK --interval=3m --retries=3 CMD curl --fail http://localhost:${PORT}/ || exit 1`

With all of the above optimizations, the image went from 1.15GB down to 186.76MB in size.
Start & Stop EC2 instances from AWS command line
- Start:
aws ec2 start-instances --instance-ids <instance-id>
- Stop:
aws ec2 stop-instances --instance-ids <instance-id>
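As a worked example (the instance id below is a placeholder), an instance can be started and its state confirmed from the AWS CLI:

```bash
# Start the instance (placeholder instance id)
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Confirm the instance state (should move from "pending" to "running")
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].State.Name"
```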
Connect with PuTTY
- Session -> Host Name: set to the Public IPv4 address (something like 54.165.10.190); check that the port is 22
- Connection -> Seconds between keepalives: set to 30
- Connection -> SSH -> Auth -> Credentials -> Private key file for authentication: select the dps955-fragments-key-pair.ppk file (in the fragments/.ssh folder)
- Login as ec2-user
Setup CentOS with necessary tools
- Update system's packages with
sudo yum update
- Install package (git as the example)
sudo yum install git -y
- Check package version (git as the example)
git --version
- Switch between node versions with `nvm use --lts`, or `nvm use 16` (version 16 is installed), or `nvm use 14` (version 14 is installed)
Copy source code from local machine to EC2
- On Local Machine: Run `npm pack`
- On Local Machine: Run the pscp command below (see the filled-in example after this list)
  - `-v` for verbose output (gives a detailed explanation, especially if something goes wrong)
  - `-i .ssh/key-pair-file.ppk` key-pair for the connection
  - `fragments-1.0.0.tgz` update with a newer version if applicable
  - `ec2-user` username (without it, pscp automatically guesses the username & the key is refused)
  - `ec2-54-165-10-190.compute-1.amazonaws.com` example of the remote computer address (check Public IPv4 DNS)
  - `-P 22` if connection issues still arise, add this flag to force a connection on port 22

pscp -v -i .ssh/<key-pair-file.ppk> <fragments-1.0.0.tgz> ec2-user@<ec2-54-165-10-190.compute-1.amazonaws.com>:

- On Remote Machine: Run `tar -xvzf <fragments-0.0.1.tgz>` to unpack
- On Remote Machine: Run `cd package` to get into the unpacked folder
- On Remote Machine: Run `nvm use 14` prior to npm installing for fragments-ui, otherwise it might freeze
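As a concrete example of the pscp step, with the placeholders filled in using the key-pair, version, and DNS name already mentioned in this README:

```bash
# Copy the packed tarball from the local machine to the EC2 instance's home directory
pscp -v -i .ssh/dps955-fragments-key-pair.ppk fragments-1.0.0.tgz ec2-user@ec2-54-165-10-190.compute-1.amazonaws.com:
```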
Docker on EC2
- Install Docker: `sudo yum install -y docker` (might need to reload the ssh session with `exit`)
- Start Docker: `sudo dockerd`
- Pack, transfer & unpack the codebase (refer to the commands in the sections above)
- Run `npm install` (will generate package-lock.json)
- Build the image: `sudo docker build -t <fragments>:<latest> .`
- Run the other commands needed to run the docker container (add `sudo` for root rights)
Screenshot: the server running as a Docker container in detached mode on EC2, using port 8080 for both the host and the container.
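A sketch of the command behind that screenshot (the `.env` file name is an assumption):

```bash
# On the EC2 instance: run the image detached, mapping host port 8080 to container port 8080
sudo docker run --rm --name fragments --env-file .env -p 8080:8080 -d fragments:latest
```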
Example of Docker Container Testing on EC2
Using the browser to hit the EC2 server's health check route inside the docker container.
Docker logs on the EC2 instance show request information after the EC2 server's health check route `/` is visited via the browser (screenshot above).
Pull Docker Images from ECR
- On the EC2 instance, log in the docker client:
$ export AWS_ACCESS_KEY_ID=<access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<secret-access-key>
$ export AWS_SESSION_TOKEN=<session_token>
$ export AWS_DEFAULT_REGION=us-east-1
# Login the EC2's docker client, swapping your full ECR registry name
# Make sure docker is running! (sudo dockerd)
$ sudo docker login -u AWS -p $(aws ecr get-login-password --region us-east-1) 390240750368.dkr.ecr.us-east-1.amazonaws.com
- Pull Image
sudo docker pull 390240750368.dkr.ecr.us-east-1.amazonaws.com/fragments:vtag
- Run Pulled Image
sudo docker run --rm --name fragments --env-file .env -p 8080:8080 390240750368.dkr.ecr.us-east-1.amazonaws.com/fragments:vtag
Route | Method | Authentication | What it does | Query Options |
---|---|---|---|---|
`/` | GET | not required | Server health check | |
`/v1/fragments` | GET | required | Get all user's fragments | setting `?expand=1` will return fragments in the expanded form with metadata |
`/v1/fragments/:id` | GET | required | Get specific user fragment | |
`/v1/fragments/:id/info` | GET | required | Get specific user fragment in the expanded form with metadata | |
`/v1/fragments` | POST | required | Create fragment for the user | |
`/v1/fragments/:id` | PUT | required | Update specific user fragment | |
`/v1/fragments/:id` | DELETE | required | Delete specific user fragment | |
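For instance, the expanded listing can be requested like this (same test user credentials as in the curl section below):

```bash
# List all of the user's fragments with full metadata
curl.exe -i -u user1@email.com:password1 "http://localhost:8080/v1/fragments?expand=1"
```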
Name | Type | Extension | Supported Conversion Extensions |
---|---|---|---|
Plain Text | `text/plain` | `.txt` | `.txt` |
Markdown | `text/markdown` | `.md` | `.md`, `.html`, `.txt` |
HTML | `text/html` | `.html` | `.html`, `.txt` |
JSON | `application/json` | `.json` | `.json`, `.txt` |
PNG Image | `image/png` | `.png` | `.png`, `.jpg`, `.webp`, `.gif` |
JPEG Image | `image/jpeg` | `.jpg` | `.png`, `.jpg`, `.webp`, `.gif` |
WebP Image | `image/webp` | `.webp` | `.png`, `.jpg`, `.webp`, `.gif` |
GIF Image | `image/gif` | `.gif` | `.png`, `.jpg`, `.webp`, `.gif` |
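A sketch of how a conversion might be requested, assuming the target extension is appended to the fragment id in the GET route (the id is a placeholder, and the extension-based mechanism is an assumption rather than something shown in the table above):

```bash
# Retrieve a Markdown fragment converted to HTML
curl.exe -i -u user1@email.com:password1 http://localhost:8080/v1/fragments/<fragment-id>.html
```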
Simple fetch
curl.exe http://localhost:8080
Simple fetch (more readable response)
curl.exe -s localhost:8080 | jq
Expected Output
{"status":"ok","author":"<author>","githubUrl":"<github-repo>","version":"<package.json-version>", "hostname": "<hostname>"}
Fetch for user's fragments
curl.exe -i -u user1@email.com:password1 http://localhost:8080/v1/fragments
Add a new fragment for the user
curl.exe -i -X POST -u <user@email.com>:<password> -H "Content-Type: <text/plain>" -d "<This is a fragment>" http://localhost:8080/v1/fragments
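The update and delete routes from the table above can be exercised the same way (the fragment id and Content-Type are placeholders):

```bash
# Update an existing fragment's data
curl.exe -i -X PUT -u <user@email.com>:<password> -H "Content-Type: <text/plain>" -d "<Updated fragment data>" http://localhost:8080/v1/fragments/<fragment-id>

# Delete a fragment
curl.exe -i -X DELETE -u <user@email.com>:<password> http://localhost:8080/v1/fragments/<fragment-id>
```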
Expected Health Check Route Response
{
"status": "ok",
"author": "<author>",
"githubUrl": "<github-repo>",
"version": "<package.json-version>",
"hostname": "<hostname>"
}
Successful Response
Basic Successful Response
{
"status": "ok"
}
Successful Response that returns data
{
"status": "ok",
"fragment": {
"id": "<fragment-id>",
"ownerId": "<owner-id>",
"created": "<date-of-creation>",
"updated": "<date-of-modification>",
"type": "<fragment-type>",
"size": <fragment-size>
}
}
Error Response
{
"status": "error",
"error": {
"code": <error-status-code>,
"message": "<error-message>"
}
}
Testing is used extensively in this application. Integration tests use hurl to test responses from different routes. Unit tests are implemented with the supertest npm package. The code coverage report shows that over 85% of the codebase is covered by tests.
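For reference, both suites can be run from the project root; the integration test path below is an assumption about the repository layout:

```bash
# Run unit tests and generate the code coverage report (script listed above)
npm run coverage

# Run the hurl integration tests (path is an assumption)
hurl --test tests/integration/*.hurl
```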